<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Code Runner]]></title><description><![CDATA[Writes code, runs, sometimes sleeps]]></description><link>https://coderunner.io/</link><image><url>https://coderunner.io/favicon.png</url><title>Code Runner</title><link>https://coderunner.io/</link></image><generator>Ghost 2.37</generator><lastBuildDate>Sun, 05 Apr 2026 20:12:55 GMT</lastBuildDate><atom:link href="https://coderunner.io/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[How to migrate a paid Wordpress.com blog to Ghost, completely free!]]></title><description><![CDATA[Recently I found myself wanting to move a blog I had over on Wordpress.com to Ghost. I figured it would be a fairly simple process, but as is often the case it turned out a little more involved. Not least because I wanted to avoid having to pay almost $300 to be able to do it...!]]></description><link>https://coderunner.io/migrate-a-blog-from-paid-wordpress-to-ghost-for-free/</link><guid isPermaLink="false">5e2db8b81784190001d569b0</guid><category><![CDATA[ghost]]></category><category><![CDATA[docker]]></category><category><![CDATA[wordpress]]></category><dc:creator><![CDATA[Tim Bennett]]></dc:creator><pubDate>Sun, 02 Feb 2020 18:20:37 GMT</pubDate><media:content url="https://coderunner.io/content/images/2020/02/wordpress-ghost-logos-border.png" medium="image"/><content:encoded><![CDATA[<img src="https://coderunner.io/content/images/2020/02/wordpress-ghost-logos-border.png" alt="How to migrate a paid Wordpress.com blog to Ghost, completely free!"><p>Recently I found myself wanting to move a blog I had over on Wordpress.com to Ghost. I figured it would be a fairly simple process, but as is often the case it turned out a little more involved. 
Not least because I wanted to avoid having to pay almost $300 to be able to do it...!</p><p>Back in 2017 I set up a <a href="https://wanderersoftheworld.com">travel blog</a> to capture some of my adventures with my girlfriend as we moved around the world. </p><p>For ease of use I opted for a paid Wordpress plan, even though I was already self-hosting <em>this</em> blog using Ghost at the time. My thinking was that I needed something simple enough not to require any special tweaking or setup, and that would allow me to write on the go using only an iPad. Wordpress had a handy app for that, so sans laptop but coupled with a Bluetooth keyboard, that's what I did.</p><p>I was never happy with it though. Wordpress to me is just too bloated for a simple, bog-standard blogging platform. What's more, the basic paid hosting plan is quite limited, not allowing you to change your theme aside from a handful of options, none of which looked very good. Oh, and it was <em>slow</em>. </p><p>It may power <a href="https://venturebeat.com/2018/03/05/wordpress-now-powers-30-of-websites/">30% of the Internet</a>, but I'd rather use something else. </p><p>So, when my subscription recently came up for renewal, I figured it was as good a time as any to spend some time migrating it to Ghost, and hosting it in <a href="https://coderunner.io/hello-blog-an-advanced-setup-of-ghost-and-docker-made-simple-2018/">the same way I host this one</a>. </p><p>This should be easy, I thought; after all, there is an <a href="https://ghost.org/faq/migrating-from-wordpress-to-ghost/">entire guide</a> to do exactly this, using an official <a href="https://wordpress.org/plugins/ghost/">Ghost plugin for Wordpress</a>. 
Great!</p><p>Except, when I tried to install the plugin, I was greeted with:</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://coderunner.io/content/images/2020/01/Screenshot-2020-01-26-at-16.50.51.png" class="kg-image" alt="How to migrate a paid Wordpress.com blog to Ghost, completely free!"><figcaption>Hmm, okay... how much?</figcaption></figure><!--kg-card-end: image--><p>Okay, maybe it's worth it to upgrade just to do this migration. After all, once it's done, I'll no longer need a paid plan at all, so it'll be worth it in the long run, <em>eventually</em>.</p><p>Let's upgrade!</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://coderunner.io/content/images/2020/01/Screenshot-2020-01-26-at-16.52.30.png" class="kg-image" alt="How to migrate a paid Wordpress.com blog to Ghost, completely free!"><figcaption>Ok, maybe not</figcaption></figure><!--kg-card-end: image--><p>Ouch! Maybe it's just me, but I'm not really down with paying $296 just to move a few blog posts over from one place to another.</p><p>There's only a handful of posts, so I <em>could</em> have just recreated them all in Ghost manually, copying over the content, tweaking the formatting, and uploading the images again. But, that's not the developer way, and also, what if I <em>did</em> have hundreds or more to migrate?</p><p>Instead I figured there must be a way to do this all for free, so I set about finding it. My goals were:</p><ol><li>Migrate all of the post content, including images</li><li>Migrate all of the post comments from the Wordpress built-in system to Disqus</li><li>Drop the date suffix from every permalink (I had this setting on Wordpress turned on and regretted it). 
But, ensure the original URLs still work after everything is migrated, too!</li><li>Make it all SSL-enabled when self-hosted (this comes as standard on Wordpress)</li><li>Do it all for free!</li></ol><p>This post is a step-by-step guide to how I did all of the above. As always, if you have any comments, questions, or feedback, leave a comment below!</p><p>I have split this up into three parts:</p><ul><li>Part 1: How to migrate a paid Wordpress blog to Ghost, completely free! (this post)</li><li>Part 2: Migrate comments from Wordpress to a Ghost blog with Disqus (coming soon)</li><li>Part 3: Set up SSL for free on a Dockerised Ghost blog with Let's Encrypt (coming soon)</li></ul><p>So are you ready? Grab a cup of tea and let's get started!</p><h2 id="steps-involved">Steps involved</h2><p>Let's start with a high-level overview of what we're going to do, so it's clear in our minds as we progress through each step:</p><ol><li>First, we will set up a Wordpress install locally, and import the content from our paid plan. The open-source version does not have any of those pesky limitations with installing plugins</li><li>Next, we'll install the Ghost plugin to our local installation. We'll then be able to export an archive of our blog in Ghost format</li><li>Now we can set up a normal Ghost install, and import the archive we just exported</li><li>At this point, we have our blog content and images moved over, but we still have a little bit of cleanup to do. We'll tweak the CSS and set up redirects so the old permalinks still work. While we're at it, we'll purge any images that we no longer need, to keep the total size down</li><li>We'll set up Disqus, and migrate over the comments</li><li>Finally, we will set up SSL using Let's Encrypt</li></ol><p>At that point we should have everything replicated on Ghost, and it's just a case of changing the DNS entries to point there instead of Wordpress. 
</p><p>Okay, time to get stuck in!</p><h2 id="setup-wordpress-locally-in-docker-with-docker-compose">Setup Wordpress locally in Docker with Docker Compose</h2><p>Wordpress is actually open-source, and lives on GitHub <a href="https://github.com/WordPress/WordPress">here</a>. When someone talks about the self-hosted version, they are referring to <a href="https://wordpress.org/">https://wordpress.org/</a>. The paid version, by comparison, lives at <a href="https://wordpress.com/">https://wordpress.com/</a>. </p><p>Anyone can use Wordpress for free if they're willing to host it themselves, or you can pay someone else to host it for you. And one of those companies you can pay to do it for you is... Wordpress.com. It was set up by Wordpress co-founder Matt Mullenweg, but is pretty much the same as any other hosting service that happens to use the open-source software underneath.</p><p>What you get with the paid version is automatic upgrades, hosting, support, built-in backups and things like that. Basically, you simply pay and then create. The downside is that you are limited in customising and tweaking things and have less control, which is only somewhat alleviated by upgrading to one of the more expensive plans. </p><blockquote>For more on the difference between self-hosted and paid Wordpress, check out this post: <a href="https://www.wpbeginner.com/beginners-guide/self-hosted-wordpress-org-vs-free-wordpress-com-infograph/">https://www.wpbeginner.com/beginners-guide/self-hosted-wordpress-org-vs-free-wordpress-com-infograph/</a></blockquote><p>So, because Wordpress.org and Wordpress.com use the same software underneath, we can spin up our own local installation on our laptop, and just import our data.</p><p>One of the simplest ways to do that is via Docker. There is an <a href="https://hub.docker.com/_/wordpress/">official image</a> for Wordpress, which is a great start. Then it just needs to be paired with a suitable MySQL database. 
And of course, there's an <a href="https://hub.docker.com/_/mysql">image for that</a> too.</p><blockquote>If you don't know much about Docker, you can still follow along, but it would be worth reading up on the basics. You can install it for your system <a href="https://docs.docker.com/install/">here</a>, and read more <a href="https://docs.docker.com/get-started/">here</a>.</blockquote><p>We could bring up these containers manually, but it's a bit easier to use <a href="https://docs.docker.com/compose/">Docker Compose</a>. This will handle bringing up both the blog and database containers and wiring them together. </p><p>Create a new directory somewhere, like <code>wordpress_local</code>, and then create a <code>docker-compose.yml</code> file inside it that looks like this:</p><!--kg-card-begin: code--><pre><code class="language-yaml">version: '3.3'

services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: somewordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress
       WORDPRESS_DB_NAME: wordpress
volumes:
    db_data: {}
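    # Tip (comment added for this guide): once you've exported your data,
    # `docker-compose down -v` tears the throwaway stack back down, removing
    # the containers and the db_data volume along with it.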
</code></pre><!--kg-card-end: code--><p>There's not much clever stuff going on here. We just set up some environment variables so that everything can talk nicely together. The <code>WORDPRESS_DB_USER</code> corresponds to the <code>MYSQL_USER</code>, and similarly for the other variables.</p><blockquote>We won't worry about the security of the passwords or anything for this, as this setup is a throwaway. Once we've got our data out, we can just destroy the stack.</blockquote><p>Now open a terminal in the directory you created, and fire up Wordpress locally with:</p><!--kg-card-begin: code--><pre><code class="language-bash">docker-compose up -d</code></pre><!--kg-card-end: code--><p>You'll see it starting up:</p><!--kg-card-begin: code--><pre><code class="language-bash">Creating network "wordpress_wanderers_default" with the default driver
Creating wordpress_wanderers_db_1 ... done
Creating wordpress_wanderers_wordpress_1 ... done</code></pre><!--kg-card-end: code--><p>Now if you hit <a href="http://localhost:8000">http://localhost:8000</a> in your browser, you'll get the Wordpress setup page:</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://coderunner.io/content/images/2020/02/Screenshot-2020-02-01-at-16.27.47.png" class="kg-image" alt="How to migrate a paid Wordpress.com blog to Ghost, completely free!"><figcaption>Wordpress setup screen</figcaption></figure><!--kg-card-end: image--><p>At this point you can just complete the wizard normally. Don't worry too much about the details, as again this is just a throwaway install.</p><p>Once you have created a user you should be able to log in, and you will be at the Wordpress dashboard, just like on the paid hosting. Now to bring over our blog content!</p><h3 id="exporting-our-paid-wordpress-blog-and-reimporting-locally">Exporting our paid Wordpress blog and reimporting locally</h3><p>In your paid Wordpress dashboard, generate an export of all your content by going to '<em>Tools -&gt; Export</em>'. You should export both the content archive as well as the media. 
Save both to the same folder you created earlier.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://coderunner.io/content/images/2020/02/Screenshot-2020-02-01-at-16.35.41.png" class="kg-image" alt="How to migrate a paid Wordpress.com blog to Ghost, completely free!"><figcaption>One click export, pretty handy</figcaption></figure><!--kg-card-end: image--><p>Back in our local Wordpress dashboard, we can now just head over to '<em>Tools -&gt; Import</em>', and click to install the importer:</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://coderunner.io/content/images/2020/02/Screenshot-2020-02-01-at-16.39.05.png" class="kg-image" alt="How to migrate a paid Wordpress.com blog to Ghost, completely free!"></figure><!--kg-card-end: image--><p>Once done, '<em>Install Now</em>' will change to '<em>Run importer</em>', where you can then import the content archive you saved.</p><blockquote>The content archive is the single <code>xml</code> file, inside the first <code>zip</code> archive</blockquote><p>You'll need to map every user from your paid blog to one on the local one. If, like me, that's just the same single user, it's pretty straightforward. You can also tick the option to '<em>Download and import file attachments</em>', which will try to retrieve the images automatically, so we can keep the media archive we downloaded as a backup in case this does not work.</p><p>Let that sit tight and import everything. Depending on how much content you have, it could take a little while.</p><blockquote>If you get a message that some items failed to import, then you may have some custom plugins on your paid install that add different post types etc. In this case you might need to install them locally too, or make some tweaks, before those items will import. 
I had this <a href="https://wordpress.stackexchange.com/questions/252071/when-importing-failed-to-import-invalid-post-type-feedback">problem with feedback posts</a>, from the Jetpack plugin. </blockquote><p>Now if you hit http://localhost:8000 again, you should see your posts (the theme is probably ugly, but we're moving it all to Ghost anyway so we'll tidy it all up there).</p><h3 id="exporting-to-ghost">Exporting to Ghost</h3><p>Time to move everything to Ghost. First, install and activate the plugin. Now that we're hosting Wordpress ourselves, we can install anything we want!</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://coderunner.io/content/images/2020/02/Screenshot-2020-02-02-at-17.08.04.png" class="kg-image" alt="How to migrate a paid Wordpress.com blog to Ghost, completely free!"></figure><!--kg-card-end: image--><p>Then also install the '<em>Categories and Tag Converter</em>' under '<em>Tools -&gt; Import</em>'. We'll need this to map Wordpress categories to Ghost tags.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://coderunner.io/content/images/2020/02/Screenshot-2020-02-02-at-17.55.10.png" class="kg-image" alt="How to migrate a paid Wordpress.com blog to Ghost, completely free!"></figure><!--kg-card-end: image--><p>Once the latter is installed, we can run the tag conversion. Check all, then hit that convert button.</p><p>Finally, we can now run the Ghost export plugin and download a zip archive of our whole blog in Ghost format, nice!</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://coderunner.io/content/images/2020/02/Screenshot-2020-02-02-at-17.57.53.png" class="kg-image" alt="How to migrate a paid Wordpress.com blog to Ghost, completely free!"></figure><!--kg-card-end: image--><p>Great, so we now have something we can import into a fresh Ghost installation. 
At this point you could find a paid hosting provider that sets everything up for you, or you could run your own stack as I do using Docker. A walkthrough for that is <a href="https://coderunner.io/hello-blog-an-advanced-setup-of-ghost-and-docker-made-simple-2018/">here</a>, if you're interested. </p><blockquote>It doesn't matter how or where your Ghost blog is hosted, so pick whichever makes most sense for you.</blockquote><p>Once you have your new install ready, you can head over to the Labs page and import your archive.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://coderunner.io/content/images/2020/02/Screenshot-2020-02-02-at-18.06.52.png" class="kg-image" alt="How to migrate a paid Wordpress.com blog to Ghost, completely free!"></figure><!--kg-card-end: image--><p>So now we've managed to get all our content migrated over from Wordpress to Ghost, and all for free. So far so good!</p><h3 id="updating-the-slugs-to-remove-the-dates">Updating the slugs to remove the dates</h3><blockquote>This step is something you can skip if it's not relevant to you. </blockquote><p>When I set up my blog on Wordpress, I had the settings checked to create the slug for each page using the date it was published, like this: <a href="https://wanderersoftheworld.com/2018/12/10/24-hours-on-a-bus-journey-to-vietnam/">https://wanderersoftheworld.com/2018/12/10/24-hours-on-a-bus-journey-to-vietnam/</a>. </p><p>In my mind it makes content look dated, is cluttered, and could possibly affect <a href="https://nathanieltower.com/how-and-why-to-remove-dates-from-your-permalinks/">search engine ranking and traffic</a> too. </p><p>We can change it, but what about any references kicking around on the Internet, or links people have bookmarked? How do we ensure those still work? Fortunately it is very easy to do this using Ghost. We can upload a URL redirect map, which says, for a given <code>from</code> URL, what it should be mapped <code>to</code>. 
And even better, it supports regular expressions. This means that we can:</p><ol><li>Remove the date from the <code>Post URL</code> section of each post's settings</li><li>Add a redirect from the old link to the new one, so that anyone (or any search engine) that uses it will still find your post</li></ol><p>The mapping we need is as simple as a single <code>redirects.json</code> file you can create:</p><!--kg-card-begin: code--><pre><code class="language-json">[{"from":"^/\\d{4}/\\d{2}/\\d{2}/(.*)","to":"/$1","permanent":true}]</code></pre><!--kg-card-end: code--><blockquote>If your date format is slightly different, you might need to tweak the regex</blockquote><p>With this we match any URL starting with <code>/yyyy/mm/dd/x</code> and replace it with just <code>x</code>. </p><p>All you need to do is upload this file and voila, hitting one of the old links will automatically redirect to the same post without the date. Great!</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://coderunner.io/content/images/2020/02/Screenshot-2020-02-02-at-20.32.15.png" class="kg-image" alt="How to migrate a paid Wordpress.com blog to Ghost, completely free!"></figure><!--kg-card-end: image--><blockquote>For more detailed info on redirects, see the <a href="https://ghost.org/tutorials/implementing-redirects/">Ghost docs on it</a></blockquote><p>What's next? Well, you will probably find that, while things look <em>fairly</em> good out of the box, you may want to make a few visual tweaks. </p><p>Let's do that now!</p><h3 id="tweaking-the-css-and-styling">Tweaking the CSS and styling</h3><p>The default theme for Ghost is Casper, which is nice and clean and often good enough. This blog uses it too, aside from a few small tweaks I made. 
One way to make those tweaks is to <a href="https://github.com/TryGhost/Casper">fork the theme</a> (it's open source after all), after which you can make any modifications you like before packaging it up and importing it back to your blog.</p><p>That's great, but is there a quicker, easier way if you just want to make a few small tweaks here and there? Yes: via code injection.</p><p>Out of the box, Ghost includes the ability to add custom CSS/JavaScript code to the entire blog and even to individual pages. This can be incredibly powerful!</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://coderunner.io/content/images/2020/02/Screenshot-2020-02-02-at-19.20.14.png" class="kg-image" alt="How to migrate a paid Wordpress.com blog to Ghost, completely free!"></figure><!--kg-card-end: image--><p>To do this site-wide, just go to '<em>Settings -&gt; Code Injection</em>'. Now you can add arbitrary code to the header or footer on each page. Below are some examples of things you could easily change.</p><blockquote>Add any custom CSS between <code>&lt;style&gt;</code> and <code>&lt;/style&gt;</code> tags, as shown in the first example</blockquote><p><strong>Add beautiful, free fonts</strong></p><p>Found a nice font over at <a href="https://fonts.google.com/">Google Fonts</a>? You could add it and use it with something like:</p><!--kg-card-begin: code--><pre><code class="language-css">&lt;style&gt;
@import url('https://fonts.googleapis.com/css?family=Amatic+SC&amp;display=swap');

/* Make title a nicer font (and bigger) */  
.site-title {
    font-family: 'Amatic SC', cursive;
    font-size: 8rem;
}
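
/* (Optional extra; assumes your version of Casper also exposes the
   .site-description class for the tagline under the title) give it
   the same font */
.site-description {
    font-family: 'Amatic SC', cursive;
}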
&lt;/style&gt;</code></pre><!--kg-card-end: code--><p><strong>Remove the social media and TryGhost links</strong></p><p>If you would rather not have them (like me), just hide them:</p><!--kg-card-begin: code--><pre><code class="language-css">/* Remove social links in top-right */    
.social-links, .floating-header-share {
  display: none;
}

.site-footer-nav a[href="https://twitter.com/tryghost"] {
  display: none;
}</code></pre><!--kg-card-end: code--><p><strong>Change the colour of the progress bar</strong></p><!--kg-card-begin: code--><pre><code class="language-css">/* Override the default colour of the progress bar */      
progress::-webkit-progress-value { background-color: #98b4de !important; }
progress::-moz-progress-bar {background-color: #98b4de !important;}
progress {color: #98b4de;}</code></pre><!--kg-card-end: code--><p><strong>Add an Instagram link at the bottom</strong></p><p>This one requires a bit of custom <code>JavaScript</code>, best placed in the footer:</p><!--kg-card-begin: code--><pre><code class="language-js">&lt;script&gt;
// Remove social-links for facebook and twitter, replace with instagram
$(".site-footer-nav").empty()
$(".site-footer-nav").append('&lt;a href="http://instagram.com/yourusername" target="_blank" rel="noopener"&gt;Instagram&lt;/a&gt;')
&lt;/script&gt;</code></pre><!--kg-card-end: code--><p><strong>Change the position of an image on an individual blog post</strong></p><p>Maybe after doing the import, you find that just a handful of the cover images across your blogs look really bad. Maybe they're too big, or showing too high up or something similar. Well instead of redoing all your images, you could selectively apply a little custom CSS to 'fix' just these. To do that just go to '<em>Settings -&gt; Code Injection</em>' from the editor window for a single post.</p><p>Then you can make the micro-adjustments you need:</p><!--kg-card-begin: code--><pre><code class="language-css">&lt;style&gt;
@media (min-width: 800px) {    
    .post-full-image img {
        height: 500px;
        object-position: 0px -600px;
    }
}
&lt;/style&gt;</code></pre><!--kg-card-end: code--><p>Hopefully that gives you an idea of the kind of things you can do. If you want to make more substantial changes then either forking the theme or finding another one you like might be better. But for simple, quick things like this, it works great!</p><h3 id="purging-unused-images">Purging unused images</h3><p>You might find you have lots of images that are not referenced and just sitting unused, taking up space. Wordpress tends to create different versions of the same image, which can lead to a bit of clutter.</p><p>There's a handy open-source tool, <a href="https://github.com/ghostboard/ghost-purge-images">ghost-purge-images</a>, which can help with this. </p><p>If you've set up your <a href="https://coderunner.io/hello-blog-an-advanced-setup-of-ghost-and-docker-made-simple-2018/">Ghost blog in Docker</a>, we can get this working without much effort. We'll just exec into our container and add the package, via <code>npm</code>.</p><!--kg-card-begin: code--><pre><code class="language-bash">docker exec -it &lt;yourblogcontainer&gt; bash
npm install -g ghost-purge-images</code></pre><!--kg-card-end: code--><p>Now that it's installed, we need some API keys for it to work. For that, just open the Ghost admin panel, go to '<em>Integrations</em>', and add a '<em>Custom Integration</em>'. </p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://coderunner.io/content/images/2020/02/Screenshot-2020-02-02-at-20.46.51.png" class="kg-image" alt="How to migrate a paid Wordpress.com blog to Ghost, completely free!"></figure><!--kg-card-end: image--><p>Click Create, and you'll get a <code>Content API Key</code> and <code>Admin API Key</code>. </p><p>Now back in the blog container shell, run:</p><!--kg-card-begin: code--><pre><code class="language-bash">cd /var/lib/ghost
ghost-purge-images display --content-key=YOUR_CONTENT_KEY --admin-key=YOUR_ADMIN_KEY</code></pre><!--kg-card-end: code--><blockquote>If your setup is different, you basically want to run the command from the directory where your <code>config.development.json</code> or <code>config.production.json</code> file is. Also, for this to work, you need to have the <code>url</code> setting in your config file pointing to where Ghost is running. If you're using my Docker setup, you likely don't have that, as it is set by environment variables instead. You can add it, though, with <code>ghost config url http://localhost:2368</code></blockquote><p>If all is well, you'll be given a summary of images that are not in use. You can then run the same command, changing <code>display</code> to <code>purge</code>, to actually remove the files.</p><!--kg-card-begin: code--><pre><code class="language-bash">👇 Unused images that can be removed:
- content/images/2019/12/DSC00011.jpeg (0.08 MB)
- content/images/2019/12/DSC00011_o.jpeg (0.08 MB)
- content/images/2019/12/favicon.ico (0.10 MB)
- content/images/size/w1000/2019/12/DSC00011.jpeg (0.08 MB)
...
...
- content/images/size/w600/2019/12/DSC00011.JPG (0.04 MB)

📊 Summary:
- 18 files of 353 uploaded images (5.10%)
- Total space: 8.15MB

❔ Want to delete this files? Run `ghost-purge-images purge --content_key=YOUR_CONTENT_KEY --admin_key=YOUR_ADMIN_KEY`
🎁 Open source tool by https://ghostboard.io</code></pre><!--kg-card-end: code--><blockquote>Make sure to take a backup of your images first, just in case!</blockquote><p>Okay, that's it for this one. We've now migrated our blog over to Ghost and done some customisations, all for free. At this point you could update the DNS records to point to your shiny new blog.</p><p>But, there's a little more we may still want to do. So, in future posts, we'll look at how to migrate the comments over to Disqus and how to get SSL set up with Let's Encrypt. </p><p>Until then, have fun customising your new blog over in the Ghost world!</p>]]></content:encoded></item><item><title><![CDATA[How to compress GoPro movies (and keep metadata so that Quik is happy)]]></title><description><![CDATA[Do you have loads of GoPro movies eating up disk space? Looking for a way to compress them, but in such a way that quality is still very high and they continue to play nice with GoPro software, like Quik? I did too.]]></description><link>https://coderunner.io/how-to-compress-gopro-movies-and-keep-metadata/</link><guid isPermaLink="false">5be49fd8fe0e16000118975f</guid><category><![CDATA[ffmpeg]]></category><category><![CDATA[video transcoding]]></category><dc:creator><![CDATA[Tim Bennett]]></dc:creator><pubDate>Mon, 19 Nov 2018 17:19:49 GMT</pubDate><media:content url="https://coderunner.io/content/images/2018/11/holding-gopro.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://coderunner.io/content/images/2018/11/holding-gopro.jpeg" alt="How to compress GoPro movies (and keep metadata so that Quik is happy)"><p>Do you have loads of GoPro movies eating up disk space? Looking for a way to compress them, but in such a way that quality is still very high and they continue to play nice with GoPro software, like Quik? I did too.</p><p>I've owned a GoPro Hero 3+ and, more recently, a Hero 5. 
With the right conditions and settings they take some really nice video, but the files are sometimes just a little too space-hungry for my liking. </p><p>Whether you are uploading videos to the cloud, streaming them via a home NAS, or just short on drive space, you might find yourself wanting to reduce the size of your video files. </p><p>I've <a href="https://coderunner.io/shrink-videos-with-ffmpeg-and-preserve-metadata/">previously written</a> about how to compress videos generally, but for GoPro clips we need to do a little more. That's because there are additional streams embedded in the videos, as well as proprietary metadata.</p><p>Here I will walk through the process of compressing a video shot on my Hero 5, using <a href="https://www.ffmpeg.org/">FFmpeg</a>. If you just want to skip straight to the final command, <a href="#putting-it-all-together">scroll to the end</a>.</p><blockquote>If you have a different model GoPro, some of the metadata might be slightly different (earlier GoPros do not have a GPS sensor, for example). Hopefully there is enough information here so that you can tweak as necessary for your own camera</blockquote><p>Here's our test video:</p><figure class="kg-card kg-image-card"><img src="https://coderunner.io/content/images/2018/11/gopro-nz-original.png" class="kg-image" alt="How to compress GoPro movies (and keep metadata so that Quik is happy)"><figcaption>Snow capped mountains in NZ, shot at 2.7k, weighing in at 78.9MB for 10s</figcaption></figure><p>This video was shot at 2.7k to capture the amazing landscapes in New Zealand. I'd like to retain a decent quality, but reduce the size from 78.9MB.</p><p>If you haven't already, <a href="https://www.ffmpeg.org/">install FFmpeg</a>. 
Now based on my <a href="https://coderunner.io/shrink-videos-with-ffmpeg-and-preserve-metadata/">previous post</a>, we could start off by re-encoding just the video at a slightly higher <a href="https://slhck.info/video/2017/02/24/crf-guide.html"><code>crf</code> factor</a>, like this:</p><pre><code class="language-bash">ffmpeg -i GOPR5687.MP4 -c:a copy -c:v h264 -crf 22 output.mp4
</code></pre>
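<p>If you want to experiment before settling on a value, one option (a quick sketch; the filenames follow this post, so adjust them for your own clips) is to generate the same command for a handful of CRF values and compare the resulting file sizes:</p>

```shell
# Build the re-encode command for a given CRF value. GOPR5687.MP4 is the
# sample clip from this post; swap in your own filename.
make_cmd() {
  local crf="$1"
  printf 'ffmpeg -i GOPR5687.MP4 -c:a copy -c:v h264 -crf %s output-crf%s.mp4\n' "$crf" "$crf"
}

# Print one command per candidate value; pipe the output to `sh` to run
# them all, then compare sizes with `ls -lh output-crf*.mp4`.
for crf in 20 22 25 28; do
  make_cmd "$crf"
done
```

<p>Lower CRF values give bigger, higher-quality files, so this makes the trade-off concrete before you commit to one setting.</p>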
<p>Running the <code>-crf 22</code> command spits out a 30.4MB file, around 61.5% smaller, and the quality is fine for my purposes. Not bad!</p><blockquote>The range of the CRF scale is 0–51, where 0 is lossless, 23 is the default, and 51 is the worst quality possible. A lower value generally leads to higher quality, and a subjectively sane range is 17–28. Consider 17 or 18 to be visually lossless or nearly so; it should look the same or nearly the same as the input, but it isn't technically lossless. You should experiment with different crf values to find the sweet spot for you.</blockquote><p>Is that it? Are we done? Not quite.</p><h3 id="keeping-gopro-metadata">Keeping GoPro metadata</h3><p>If we try to import our new smaller video into <a href="https://shop.gopro.com/EMEA/softwareandapp/quik-%7C-desktop/Quik-Desktop.html">Quik</a> so that we can do some basic editing, we hit a roadblock.</p><figure class="kg-card kg-image-card"><img src="https://coderunner.io/content/images/2018/11/quik-no-files-added.png" class="kg-image" alt="How to compress GoPro movies (and keep metadata so that Quik is happy)"><figcaption>As far as Quik is concerned, the new smaller clip just doesn't cut the mustard</figcaption></figure><p>Hmm, it seems something has happened to the video during the conversion, and now GoPro's software cannot use it. </p><p>Let's investigate.</p><p>We can use FFmpeg with no options specified to sneak a peek at the metadata in the videos without actually doing anything to them, and compare the original with our new, smaller file.</p><p>Here's the original:</p><pre><code class="language-bash">~/Desktop  ffmpeg -i GOPR5687.MP4
ffmpeg version 4.0.2 Copyright (c) 2000-2018 the FFmpeg developers
...
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'GOPR5687.MP4':
  Metadata:
    major_brand     : mp41
    minor_version   : 538120216
    compatible_brands: mp41
    creation_time   : 2017-08-07T14:43:26.000000Z
    location        : -43.7941+170.1170/
    location-eng    : -43.7941+170.1170/
    firmware        : HD5.02.01.57.00
  Duration: 00:00:10.41, start: 0.000000, bitrate: 60633 kb/s
    Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuvj420p(pc, bt709), 2704x1520 [SAR 1:1 DAR 169:95], 60541 kb/s, 59.94 fps, 59.94 tbr, 60k tbn, 119.88 tbc (default)
    Metadata:
      creation_time   : 2017-08-07T14:43:26.000000Z
      handler_name    : 	GoPro AVC
      encoder         : GoPro AVC encoder
      timecode        : 14:58:24:55
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      creation_time   : 2017-08-07T14:43:26.000000Z
      handler_name    : 	GoPro AAC
      timecode        : 14:58:24:55
    Stream #0:2(eng): Data: none (tmcd / 0x64636D74), 0 kb/s (default)
    Metadata:
      creation_time   : 2017-08-07T14:43:26.000000Z
      handler_name    : 	GoPro TCD
      timecode        : 14:58:24:55
    Stream #0:3(eng): Data: none (gpmd / 0x646D7067), 33 kb/s (default)
    Metadata:
      creation_time   : 2017-08-07T14:43:26.000000Z
      handler_name    : 	GoPro MET
    Stream #0:4(eng): Data: none (fdsc / 0x63736466), 14 kb/s (default)
    Metadata:
      creation_time   : 2017-08-07T14:43:26.000000Z
      handler_name    : 	GoPro SOS
</code></pre>
<p>And here's our smaller conversion:</p><pre><code class="language-bash">~/Desktop  ffmpeg -i output.mp4
ffmpeg version 4.0.2 Copyright (c) 2000-2018 the FFmpeg developers
  ...
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'output.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.12.100
    location-eng    : -43.7941+170.1170/
    location        : -43.7941+170.1170/
  Duration: 00:00:10.41, start: 0.000000, bitrate: 23356 kb/s
    Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuvj420p(pc), 2704x1520 [SAR 1:1 DAR 169:95], 23252 kb/s, 59.94 fps, 59.94 tbr, 60k tbn, 119.88 tbc (default)
    Metadata:
      handler_name    : VideoHandler
      timecode        : 14:58:24:55
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
    Stream #0:2(eng): Data: none (tmcd / 0x64636D74), 0 kb/s
    Metadata:
      handler_name    : TimeCodeHandler
      timecode        : 14:58:24:55
</code></pre>
<p>Straight away we can see that the output from the original file is considerably longer than the one for our converted file. </p><p>There are two main differences between the two:</p><ol><li>The original file contains 5 separate streams embedded inside it, our transcode just 3</li><li>The original file uses proprietary <code>handler_name</code> values like <code>GoPro AVC</code>, whereas our transcode uses generic versions, like <code>VideoHandler</code></li></ol><p>What happened is that during our transcoding we essentially stripped out loads of data which the GoPro camera writes into the file, meaning we've lost some information as well as the ability to use our files in GoPro software.</p><p>Not ideal; let's fix that.</p><h3 id="multiple-streams">Multiple Streams</h3><p>As you would expect, a video is generally made up of both an audio and a video part, and each part is referred to as a <code>stream</code>. </p><p>What might not be so intuitive at first though is that a file can, and often does, include more streams than just one audio and one video. For example, we might have several different audio streams for different spoken languages, plus a subtitle stream. A decent video player will allow us to choose which streams we want to use when we play the file, perhaps French audio with English subtitles. </p><p>In the case of our GoPro, we have several additional <em>data</em> streams, one of which contains the GPS data that is recorded if it is switched on (seen above as <code>GoPro MET</code>).</p><p>By default, FFmpeg includes just <a href="https://trac.ffmpeg.org/wiki/Map#Default">one audio and one video stream</a> in the output file, choosing what it considers to be the <em>best</em> of each type.</p><p>To tell FFmpeg to copy all streams we can use <code><a href="https://ffmpeg.org/ffmpeg.html#Stream-handling">-map 0</a></code>. 
Also, to tell it to copy streams even if it doesn't recognise their content (necessary for some GoPro data streams), we can use <code><a href="https://ffmpeg.org/ffmpeg.html#toc-Advanced-options">-copy_unknown</a></code>.</p><pre><code class="language-bash">ffmpeg -i GOPR5687.MP4 -map 0 -copy_unknown -map_metadata 0 \
    -c copy -c:v h264 -crf 22 output.mp4
</code></pre>
<p>Since we want to preserve most of the streams without modification, we pass <code>-c copy</code> first, which says to just copy each stream in the input directly to the output. Then, we override the copy codec just for the video stream, using <code>-c:v h264</code> as before.</p><p>I've also added <code><a href="https://ffmpeg.org/ffmpeg.html#toc-Advanced-options">-map_metadata</a> 0</code> to copy all of the global metadata from input to output.</p><p>If we inspect the video again, it's better. We now have 5 streams as we wanted:</p><pre><code class="language-bash"> ~/Desktop  ffmpeg -i output.mp4
ffmpeg version 4.0.2 Copyright (c) 2000-2018 the FFmpeg developers
...
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'output.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    creation_time   : 2017-08-07T14:43:26.000000Z
    encoder         : Lavf58.12.100
    location-eng    : -43.7941+170.1170/
    location        : -43.7941+170.1170/
  Duration: 00:00:10.41, start: 0.000000, bitrate: 23407 kb/s
    Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuvj420p(pc), 2704x1520 [SAR 1:1 DAR 169:95], 23252 kb/s, 59.94 fps, 59.94 tbr, 60k tbn, 119.88 tbc (default)
    Metadata:
      creation_time   : 2017-08-07T14:43:26.000000Z
      handler_name    : VideoHandler
      timecode        : 14:58:24:55
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s (default)
    Metadata:
      creation_time   : 2017-08-07T14:43:26.000000Z
      handler_name    : SoundHandler
    Stream #0:2(eng): Data: none (tmcd / 0x64636D74), 0 kb/s (default)
    Metadata:
      creation_time   : 2017-08-07T14:43:26.000000Z
      handler_name    : TimeCodeHandler
      timecode        : 14:58:24:55
    Stream #0:3(eng): Data: none (gpmd / 0x646D7067), 32 kb/s (default)
    Metadata:
      creation_time   : 2017-08-07T14:43:26.000000Z
      handler_name    : GoPro MET
    Stream #0:4(eng): Data: none (stts / 0x73747473), 14 kb/s (default)
    Metadata:
      creation_time   : 2017-08-07T14:43:26.000000Z
      handler_name    : DataHandler
</code></pre>
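<p>As an aside, rather than eyeballing the full dump every time, <code>ffprobe</code> (which ships with FFmpeg) can print just the stream types and handler names. A small sketch, wrapped in a helper function for convenience:</p><pre><code class="language-bash"># Compact summary: index, type and handler_name for each stream
show_streams() {
  ffprobe -v error \
      -show_entries stream=index,codec_type:stream_tags=handler_name \
      -of compact "$1"
}
# Usage: show_streams output.mp4
</code></pre>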
<p>We still have two issues though!</p><ol><li>The video clip will <em>still</em> not open in Quik... probably has something to do with those generic handler names</li><li>While we were transcoding, ffmpeg spat out a warning for the <code>SOS</code> stream:</li></ol><p><code>[mp4 @ 0x7ff4cc006c00] Unknown hldr_type for fdsc, writing dummy values3.0kbits/s speed=0.148x</code></p><p>Let's continue and try to fix those now.</p><h3 id="correcting-handler-names">Correcting Handler Names </h3><p>Like most things, it is possible to customise the handler name with FFmpeg. We just need to specify the <code>handler</code> metadata tag for each stream and give the correct name.</p><p>For example, <code>-metadata:s:v: handler='  GoPro AVC'</code> sets the <code>handler_name</code> for the video stream to be called <code>GoPro AVC</code>.</p><blockquote>Although the written metadata is called <code>handler_name</code>, the tag to set it is just <code>handler</code>. This has caused confusion for <a href="https://stackoverflow.com/questions/27518432/how-to-set-custom-handler-name-metadata-for-subtitle-stream-using-ffmpeg">at least some others</a></blockquote><p>We can now try to transcode the video, making sure to set the names for the audio and video tracks:</p><pre><code class="language-bash">ffmpeg -i GOPR5687.MP4 -copy_unknown -map_metadata 0 \
-c copy -c:v h264 -crf 22 \
-map 0 \
-metadata:s:v: handler='  GoPro AVC' \
-metadata:s:a: handler='  GoPro AAC' \
output.mp4
</code></pre>
<p>This time, all is well if we try to import it into Quik.</p><figure class="kg-card kg-image-card"><img src="https://coderunner.io/content/images/2018/11/quik-file-added-small.png" class="kg-image" alt="How to compress GoPro movies (and keep metadata so that Quik is happy)"><figcaption>Quik now happily imports our video</figcaption></figure><p>Great, so we can override the handler names to the GoPro-specific ones! One slight snag though: the streams are not always in the same order. That means if we set the handler names by index, for some videos we might end up using the name <code>GoPro AAC</code> for the video, when that's the name for the audio.</p><p>This might not be a problem if you're just encoding a single file, as you can check it first to see what order it has been recorded in. But if you want to batch-convert many with the same command in a script, for example, it will bite you. </p><h3 id="selecting-streams-by-name">Selecting streams by name</h3><p>Fortunately, we can work around this by explicitly listing the input streams that we want to map, and then following that with the handler names we would like for each, in the same order. This works because FFmpeg will map the output streams in the same order as you list them from the input. </p><p>We can explicitly list the video and audio streams easily, as there is only one of each. This just requires <code>-map 0:v</code> or <code>-map 0:a</code> respectively, which says to <code>map</code> the <code>v</code>ideo or <code>a</code>udio stream from the <code>0</code>th (first) input file.</p><p>The data tracks are a bit more complicated, as there are 3 of them and they may appear in any order. 
Luckily, we can also pick the <a href="https://stackoverflow.com/questions/43391266/ffmpeg-specify-stream-by-handler-name">input streams by name</a>:</p><p><code>-map 0:m:handler_name:'  GoPro MET'</code></p><p>Here we specify that we want to map the stream which has a <code>handler_name</code> of <code>GoPro MET</code> in its <code>m</code>etadata, from the <code>0</code>th input file.</p><p>Perfect, now we have all the pieces we need!</p><h3 id="putting-it-all-together">Putting it all together</h3><p>So, to compress the GoPro video named <code>GOPR5687.MP4</code> to a smaller size, but still have it keep its metadata and work in Quik, we can use something like:</p><pre><code class="language-bash">ffmpeg -i GOPR5687.MP4 -copy_unknown -map_metadata 0 \
-c copy -c:v h264 -crf 22 -pix_fmt yuvj420p \
-map 0:v -map 0:a \
-map 0:m:handler_name:' GoPro TCD' \
-map 0:m:handler_name:' GoPro MET' \
-map 0:m:handler_name:' GoPro SOS' \
-tag:d:1 'gpmd' -tag:d:2 'gpmd' \
-metadata:s:v: handler='        GoPro AVC' \
-metadata:s:a: handler='        GoPro AAC' \
-metadata:s:d:0 handler='       GoPro TCD' \
-metadata:s:d:1 handler='       GoPro MET' \
-metadata:s:d:2 handler='       GoPro SOS (original fdsc stream)' \
output.mp4 \
&amp;&amp; touch -r GOPR5687.MP4 output.mp4
</code></pre>
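<p>And if you have a whole folder of clips, the same command drops straight into a small loop. A sketch (the <code>-small</code> output suffix is my own convention, and the leading whitespace in each GoPro handler name needs to be a real tab character, exactly as recorded by the camera):</p><pre><code class="language-bash"># Shrink every GoPro clip in the current directory, keeping metadata
for f in GOPR*.MP4; do
  [ -e "$f" ] || continue                # no matches, nothing to do
  out="${f%.MP4}-small.mp4"              # e.g. GOPR5687.MP4 -> GOPR5687-small.mp4
  if ffmpeg -i "$f" -copy_unknown -map_metadata 0 \
      -c copy -c:v h264 -crf 22 -pix_fmt yuvj420p \
      -map 0:v -map 0:a \
      -map 0:m:handler_name:' GoPro TCD' \
      -map 0:m:handler_name:' GoPro MET' \
      -map 0:m:handler_name:' GoPro SOS' \
      -tag:d:1 'gpmd' -tag:d:2 'gpmd' \
      -metadata:s:v: handler='        GoPro AVC' \
      -metadata:s:a: handler='        GoPro AAC' \
      -metadata:s:d:0 handler='       GoPro TCD' \
      -metadata:s:d:1 handler='       GoPro MET' \
      -metadata:s:d:2 handler='       GoPro SOS (original fdsc stream)' \
      "$out"; then
    touch -r "$f" "$out"                 # keep the original timestamps
  fi
done
</code></pre>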
<p>Here with the <code>-map</code> lines we extract, in order, the video, the audio, and then the <code>TCD</code>, <code>MET</code> and <code>SOS</code> data streams. Then in the <code>-metadata</code> lines we name each stream accordingly.</p><blockquote>Watch out! What looks like a space before the <code>handler_name</code> values is actually a TAB character. Yes, GoPro really does record handler names starting with a tab. And yes, for some streams like MET, if it doesn't match exactly, Quik won't recognise it. I spent some time tearing my hair out before I realised this...</blockquote><p>Remember the warning about dummy data being used for the <code>SOS</code> fdsc stream I mentioned earlier? </p><p>That happens because FFmpeg doesn't know what an fdsc stream is, so strangely, instead of just copying it (which is what we asked), it decides to stuff the whole thing with garbage data. It seems like this <code>SOS</code> stream is only used for file recovery, <a href="https://github.com/gopro/gpmf-parser#gopros-mp4-structure">and isn't that important</a>. </p><p>Nevertheless, I'd still prefer to copy it over if possible. To do this, I've added a small hack which retags the 'fdsc' stream as 'gpmd' using <code>-tag:d:2 'gpmd'</code>. Because FFmpeg is familiar with this type, it will happily copy across the data. Then, when I rename the handler, I've given it a name to indicate that it was originally the fdsc stream.</p><blockquote>If you don't care about keeping this stream, you can omit these tags. Hopefully in the future FFmpeg will just copy all the data when <code>-copy_unknown</code> is specified, so that this hack is no longer needed.</blockquote><p>I've also explicitly specified the output pixel format as <code>-pix_fmt yuvj420p</code>, so there is no guesswork on behalf of the codec. </p><blockquote>If you have any issues with colour reproduction, you might also want to look at colour profile settings. 
See <a href="https://forum.videohelp.com/threads/380610-Change-in-color-while-reencoding-with-ffmpeg">here</a> or <a href="https://forum.shotcut.org/t/bt-709-colorspace-discrepancies/585">here</a> for a little more info.</blockquote><p>Finally, I've added a <code>touch</code> command at the end, in order to copy across the original file modification timestamps to the newly created file. Handy if we're going to be sorting the files by date!</p><p>So now if we run the above command, we generate a much smaller video that retains our metadata, and keeps Quik happy:</p><figure class="kg-card kg-image-card"><img src="https://coderunner.io/content/images/2018/11/quik-file-added.png" class="kg-image" alt="How to compress GoPro movies (and keep metadata so that Quik is happy)"><figcaption>As far as Quik is concerned, the file now came straight from the GoPro ;)</figcaption></figure><blockquote>In my tests, the GPS gauge data is copied over and you can activate the gauges within Quik to overlay onto your video. However, the data doesn't seem to match up correctly, despite all being there (and I can <a href="https://community.gopro.com/t5/GoPro-Metadata-Visualization/Extracting-the-metadata-in-a-useful-format/gpm-p/40293">extract the GPS coordinates</a> recorded without issue). I suspect there is some timing data that is not copied exactly. There is a <a href="https://www.reddit.com/r/ffmpeg/comments/8qosoj/merging_raw_gpmd_as_metadata_stream/">reddit thread</a> that is relevant to this. There have also been commits to the FFmpeg codebase <a href="http://git.videolan.org/?p=ffmpeg.git;a=commit;f=libavformat/movenc.c;hb=850a45aef10b50a2344a71055a30987aea23e48a">from a GoPro engineer</a> to support the <code>gpmd</code> stream, so perhaps in time this will work correctly.</blockquote><p>That's all there is to it! And if you want to automate this process for a number of videos, I created a tool called <a href="https://github.com/bennetimo/shrinkwrap">Shrinkwrap</a> to do this. 
If you're familiar with Docker you might want to check it out, using the GoPro5 preset.</p><p>If you have any comments or improvements, feel free to leave them below!</p><p class="attributed-image">Post cover image <a href="https://images.unsplash.com/photo-1484506399805-c273b8e91dce?ixlib=rb-0.3.5&ixid=eyJhcHBfaWQiOjEyMDd9&s=07ef6ff5b6411a6aba95fdb24ebddd49&w=1000&q=80">source</a></p>]]></content:encoded></item><item><title><![CDATA[Video files taking up too much space? Let's shrink them with FFmpeg!]]></title><description><![CDATA[Do you have loads of videos littering your drive from your phone, camera, GoPro etc. taking up loads of space? So did I, so I started looking for a way to reduce the size while keeping the perceived quality the same, and retaining all of the original metadata and timestamps.]]></description><link>https://coderunner.io/shrink-videos-with-ffmpeg-and-preserve-metadata/</link><guid isPermaLink="false">5be2099bfe0e1600011896ef</guid><category><![CDATA[ffmpeg]]></category><category><![CDATA[video transcoding]]></category><category><![CDATA[docker]]></category><dc:creator><![CDATA[Tim Bennett]]></dc:creator><pubDate>Thu, 08 Nov 2018 21:05:00 GMT</pubDate><media:content url="https://coderunner.io/content/images/2018/11/video-camera-man.png" medium="image"/><content:encoded><![CDATA[<img src="https://coderunner.io/content/images/2018/11/video-camera-man.png" alt="Video files taking up too much space? Let's shrink them with FFmpeg!"><p>Do you have loads of videos littering your drive from your phone, camera, GoPro etc. taking up loads of space? So did I, so I started looking for a way to reduce the size while keeping the <em>perceived</em> quality the same, and retaining all of the original metadata and timestamps.</p><p>Storage space might have become a lot cheaper in recent years, but at the same time we're recording more and more high quality video. 
Also, when you start thinking about backing up all those precious memories to a cloud service then size really starts to matter again. </p><p>It's not just space either, over the years I've owned a range of different devices, all recording video in different formats, codecs and qualities. Some of these are now old and difficult to work with, with no native support in macOS, Plex, etc. </p><p>Here's just some of the digital cruft that's accumulated for me:</p><ul><li>Home movies from an old digital camera (mpeg2 in a .MOD container)</li><li>Old edited movie projects (mpeg2 in a .wmv container)</li><li>Clips from a <em>very</em> old mobile phone (H.263 QCIF in a .3gp container)</li><li>Video from a more modern Android phone (h.264 1080p in a .mp4 container)</li><li>GoPro Hero 3+ footage (h.264 720p in a .mp4 container)</li><li>GoPro Hero 5 footage (h.264 1080p/2.7k in a .mp4 container)</li></ul><p>Perhaps you have something similar if you've used many different devices over the years too!</p><blockquote>A 19s clip from a really old phone weighs in at 198kb. On the other hand, 10 seconds of 2.7k on the GoPro5 puts out 78.2mb. That means my GoPro (not at max quality) eats up around 750x more space per second... we've come a long way!</blockquote><p>Sure, <a href="https://www.videolan.org/vlc/index.en-GB.html">VLC</a> will play pretty much everything you can throw at it, but it's not always convenient. I'd like to be able to stream my media across my devices, whether in or out the house. </p><h2 id="goals">Goals</h2><p>So, I set about organising my video library, with a few goals:</p><ol><li>Transcode all old audio/video files to the standards of 2018 (namely h.264 video, and aac audio)</li><li>Retain as much metadata as possible, in particular creation date and file modification timestamps (so sorting files by date is not messed up, for example)</li><li>Reduce the file size! 
(But, retain the same <em>perceived</em> quality)</li></ol><blockquote>Transcoding is a lossy operation that re-encodes the entire data stream and repackages it, so there is some loss from the original. However, with the right settings, the difference is almost impossible to notice.</blockquote><h2 id="let-s-get-shrinking">Let's get shrinking</h2><p>If you google <a href="https://www.google.com/search?q=how+to+shrink+video+size">how to reduce video size</a> you'll get a whole range of different results. It's a bit of a minefield, with many blogs and articles set up promoting all sorts of shareware tools all claiming to be your one-stop solution.</p><p>There is no need to cough up any money though, because a very advanced and capable open-source tool exists: <a href="https://handbrake.fr/">HandBrake</a>. This handy app is completely free and supports macOS, Windows, and Linux.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://coderunner.io/content/images/2018/11/handbrake.png" class="kg-image" alt="Video files taking up too much space? Let's shrink them with FFmpeg!"></figure><!--kg-card-end: image--><p>HandBrake might look daunting at first, but for most of the options the defaults they've chosen are sensible. If you load a particular preset (iPhone, YouTube etc) then it pretty much just works, and has been able to handle everything I've thrown at it.</p><p>Apart from one snag. Handbrake is not the best at preserving the metadata of the original file. That's a bit of a dealbreaker for me, as I'm often sorting and organising files by date in Finder. If I use HandBrake, then every video I've taken ends up having a modified timestamp of whatever day I run them through it. 
There is an open <a href="https://github.com/HandBrake/HandBrake/issues/345">request on their github</a> to improve this, but as of now it's <a href="https://github.com/HandBrake/HandBrake/issues/345#issuecomment-407109958">not high up the priority list</a>.</p><p>It's also a little clunky working with files in batch mode. I had years worth of videos I wanted to run through it, so anything that can't be easily automated is not ideal. </p><blockquote>There is also <a href="https://osomac.com/apps/osx/handbrake-batch/">HandBrakeBatch</a>, a small wrapper around HandBrake that was made before there was any built-in batch support. It's a very simple tool, but does manage to preserve the timestamps. However, it is no longer maintained and hasn't been updated since 2013.</blockquote><p>HandBrake is a very useful tool and if you don't care about preserving all the metadata then you might find it does everything you need, so give it a try.</p><p>As preserving the metadata was important to me, I needed something else.</p><p><strong>Enter FFmpeg</strong></p><p><a href="https://www.ffmpeg.org/">FFmpeg</a> is <em>"a complete, cross-platform solution to record, convert and stream audio and video." </em>It's a very advanced and powerful tool that can do much more than simple video-transcoding. You can pretty much do <a href="https://www.ffmpeg.org/ffmpeg.html">anything you can think of</a> to a video.</p><p>Let's say we have an old home video shot with a camcorder that was saved as a .MOD and we want to convert it to something more modern.</p><p>First, we need to install <a href="https://www.ffmpeg.org/download.html">FFmpeg</a>. Next, we just open up a terminal window (or cmd prompt on Windows), and fire off:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">ffmpeg -i input.MOD output.mp4
</code></pre>
<!--kg-card-end: markdown--><p>That's it. FFmpeg recognises the file extensions and uses suitable codecs and defaults for each, so in this case it will take our old <code>input.MOD</code> file and transcode it to <code>output.mp4</code>, which will be h.264 inside an .mp4 container. </p><blockquote>You can also explicitly choose the video and audio codecs to use with <code>-codec:v</code> and <code>-codec:a</code> respectively</blockquote><p>What about file size?</p><p>In my simple test I took an old video clip from a JVC Everio Camcorder, shot in 2010. </p><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://coderunner.io/content/images/2018/11/transcode-test-mod-flowers.png" class="kg-image" alt="Video files taking up too much space? Let's shrink them with FFmpeg!"><figcaption>Flowers... and a bee in there somewhere</figcaption></figure><!--kg-card-end: image--><p>The original file was 27.5mb, for a 24-second clip. The transcoded file is 3.4mb, a reduction of ~87%!</p><p>Surely we're going to be getting a terrible conversion to be able to make it that small?! Actually no, to my eyes the two are, for all intents and purposes, the same.</p><h3 id="quality">Quality</h3><p>What makes a video seem to have high <em>quality</em>? </p><p>One of the key factors that influences this is the <em>bitrate</em>, how many <em>bits</em> of information are used for encoding each second of video. If we have <em>more</em> bits, we can encode more information. Similarly, if we have <em>fewer</em> bits available to use, then we have to be selective in deciding what information to keep, and what we have to throw away.</p><p>There is always a trade-off between quality and file-size. Generally speaking, higher quality uses more space. </p><blockquote>At the extreme end, a single minute of UHD 4k footage might take up <a href="https://www.4kshooters.net/2014/06/25/how-much-hard-disk-space-do-you-need-shooting-4k/">over 5GB</a> of disk space. 
</blockquote><p>The job of a codec is to stuff as much information as possible about your video into the smallest package it can. Over time, codecs improve and better ways of compressing video are designed that might be able to achieve both higher quality <em>and</em> lower file-size.</p><blockquote>h.265 is the successor to h.264, and boasts even more impressive compression, sometimes giving as much as a <a href="https://www.boxcast.com/blog/hevc-h.265-vs.-h.264-avc-whats-the-difference">50% reduction in file-size</a>. The cost is limited support, and the need for fast, modern hardware to make use of it. The tradeoff wasn't worth it for me right now, but given time it will likely take over.</blockquote><p>The number of bits we might want to use is also not necessarily the same throughout all points of our video. For example, if the camera is held steady and not much is changing, then there's not as much to encode and we might get away with using fewer bits. </p><p>On the other hand, if we have lots of changes frame to frame, we're going to need more bits to encode it all. But to make it even more interesting, when things are moving the human eye cannot perceive as much detail as when they're static, so for fast-motion content we might also get away with fewer bits.</p><p>Fortunately we don't really have to worry about all this; we can just use the <code>crf</code> factor <a href="https://trac.ffmpeg.org/wiki/Encode/H.264#crf">setting</a> (<a href="https://slhck.info/video/2017/02/24/crf-guide.html">Constant Rate Factor</a>) from the h.264 codec. And in fact, we've already used it without knowing.</p><p>The <code>crf</code> factor basically translates as "try to keep this quality overall", and will use more or less bits at different parts of the video, depending on the content (the bitrate is <em>variable</em>). 
</p><p>As best described by the docs:</p><blockquote>The range of the CRF scale is 0–51, where 0 is lossless, 23 is the default, and 51 is worst quality possible. A lower value generally leads to higher quality, and a subjectively sane range is 17–28. Consider 17 or 18 to be visually lossless or nearly so; it should look the same or nearly the same as the input but it isn't technically lossless.</blockquote><p>The range is exponential, so increasing the CRF value +6 results in roughly half the bitrate / file size, and vice-versa.</p><p>Let's try this out:</p><p>Here's another video shot while skiing, on an old Android phone in 2012.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://coderunner.io/content/images/2018/11/transcode-test-skiing.png" class="kg-image" alt="Video files taking up too much space? Let's shrink them with FFmpeg!"><figcaption>This video was 720p, taking up 93.9mb for 1:32 of footage</figcaption></figure><!--kg-card-end: image--><p>Let's see what happens if we go crazy and try using a value of 51 for the <code>crf</code>:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">ffmpeg -i VID_20120116_121220.mp4 -crf 51 output.mp4
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://coderunner.io/content/images/2018/11/transcode-test-skiing-51.png" class="kg-image" alt="Video files taking up too much space? Let's shrink them with FFmpeg!"><figcaption>Now only 2.3mb... that's 97.5% smaller! Possibly went a bit too far though...</figcaption></figure><!--kg-card-end: image--><p>As you can see, although we achieved a drastic reduction in file-size, we had to throw away a huge amount of detail to get there; the video is terribly blocky.</p><p>So, the trick with the <code>crf</code> is to experiment with different values for <em>your own videos</em>. Depending on how much you can see the difference, how you're going to watch them, what the original source was and so on, you might choose different values to me.</p><blockquote>For my phone videos, from a Nexus 5, I'll typically get a space saving of 60%+ using a <code>crf</code> of 22 (where the difference is not noticeable to me). If the scene is mostly black, for example a video of fireworks or lightning storms, I've seen it be nearer 95%. My guess is that older hardware isn't able to do as good an on-the-fly encoding because of resource limitations, so the space saving can be great for these.</blockquote><p>Starting with the default (23) makes sense, moving nearer 18 if you value quality more, and towards 26-28 if you value space savings more. </p><h3 id="transcoding-speed">Transcoding speed</h3><p>This is all well and good, but how long does it take to run? After all if we have hundreds of files to transcode, we don't want to leave our poor laptop working for weeks!</p><p>FFmpeg has a number of <a href="https://trac.ffmpeg.org/wiki/Encode/H.264#a2.Chooseapresetandtune">speed presets</a>, which change how quickly the transcode will run. 
The default is 'medium', but you can choose from 'ultrafast' to 'veryslow'.</p><p>A slower preset will take longer to run (sometimes <em>significantly</em> longer), and put more demands on your hardware, but it <em>might</em> be able to do a better job and hit the same quality with a smaller file-size.</p><p>How come? Imagine you're packing your car boot to go on holiday and you're in a hurry. You're standing at the car and your partner brings you each suitcase, bag or box to put in one by one. As soon as you take each item to pack, you find a space in the car and put it in. At some point the boot fills up, so you start putting things on the seats and on the roof-rack. All packed!</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://coderunner.io/content/images/2018/11/packing-car-clipart.png" class="kg-image" alt="Video files taking up too much space? Let's shrink them with FFmpeg!"><figcaption>Image <a href="http://laoblogger.com/packing-car-clipart.html#gal_post_146586_car-ride-clipart-9.jpg">Source</a></figcaption></figure><!--kg-card-end: image--><p>Now imagine that you have to pack the same car with the same items, but you're in no hurry this time. So you lay out all the items on the ground and take your time thinking through what best fits where. Sometimes you'll take something out and rearrange it if you find something else later that better fits the space. You might be able to pack all the same items into the same car, but fit everything into just the boot leaving the seats and roof free.</p><p>This is sort of how better compression can work. 
With more time, the codec can try different things, go over things multiple times, and generally just make a better choice as to what to put where.</p><p>I find that <code>medium</code> and <code>fast</code> are good sweet-spots for me on my laptop:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">ffmpeg -i VID_20120116_121220.mp4 -crf 22 -preset fast output.mp4
</code></pre>
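<p>If you're curious what the presets do for your own footage, here's a quick sketch that encodes one clip at several presets and prints the resulting file sizes (<code>compare_presets</code> is just an illustrative helper name):</p><pre><code class="language-bash"># Encode the same clip at a few presets and compare the output sizes
compare_presets() {
  for p in veryfast fast medium slow; do
    # -y overwrites existing outputs, -loglevel error keeps ffmpeg quiet
    ffmpeg -y -loglevel error -i "$1" -crf 22 -preset "$p" "preset-$p.mp4"
    echo "$p: $(wc -c "preset-$p.mp4" | awk '{print $1}') bytes"
  done
}
# Usage: compare_presets VID_20120116_121220.mp4
</code></pre>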
<!--kg-card-end: markdown--><h2 id="preserving-metadata">Preserving metadata</h2><p>We've now managed to compress our videos down to a much smaller size and retain enough quality that we can't tell the difference. Great!</p><p>Only, so far we've lost most of the metadata doing so. </p><p>Not so great.</p><p>The original skiing clip has, among other metadata, all of the correct timestamps:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">File Modification Date/Time     : 2012:01:16 11:13:54+00:00
File Access Date/Time           : 2018:11:08 16:34:25+00:00
File Inode Change Date/Time     : 2018:11:07 22:20:37+00:00
Create Date                     : 2012:01:16 11:13:54
Modify Date                     : 2012:01:16 11:13:54
Track Create Date               : 2012:01:16 11:13:54
Track Modify Date               : 2012:01:16 11:13:54
Media Create Date               : 2012:01:16 11:13:54
Media Modify Date               : 2012:01:16 11:13:54
</code></pre>
<!--kg-card-end: markdown--><p>In contrast, our new smaller one has lost it all:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">File Modification Date/Time     : 2018:11:08 15:16:28+00:00
File Access Date/Time           : 2018:11:08 16:34:01+00:00
File Inode Change Date/Time     : 2018:11:08 15:16:28+00:00
Create Date                     : 0000:00:00 00:00:00
Modify Date                     : 0000:00:00 00:00:00
Track Create Date               : 0000:00:00 00:00:00
Track Modify Date               : 0000:00:00 00:00:00
Media Create Date               : 0000:00:00 00:00:00
Media Modify Date               : 0000:00:00 00:00:00
</code></pre>
<!--kg-card-end: markdown--><p>By default, FFmpeg won't preserve the metadata from the original streams, but we can tell it to with the <a href="https://ffmpeg.org/ffmpeg.html#Advanced-options"><code>-map_metadata</code> option</a>:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">ffmpeg -i VID_20120116_121220.mp4 -crf 22 -map_metadata 0 \
    -preset fast output.mp4
</code></pre>
<!--kg-card-end: markdown--><p>This will copy all the metadata from the first input file (numbered starting from zero) to the output. </p><blockquote>We only have a single input, but it's possible to have more when you're combining videos, overlays etc</blockquote><p>Let's look at the metadata again now:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">File Modification Date/Time     : 2018:11:08 16:48:34+00:00
File Access Date/Time           : 2018:11:08 16:48:35+00:00
File Inode Change Date/Time     : 2018:11:08 16:48:34+00:00
Create Date                     : 2012:01:16 11:13:54
Modify Date                     : 2012:01:16 11:13:54
Track Create Date               : 2012:01:16 11:13:54
Track Modify Date               : 2012:01:16 11:13:54
Media Create Date               : 2012:01:16 11:13:54
Media Modify Date               : 2012:01:16 11:13:54
</code></pre>
<!--kg-card-end: markdown--><p>Better, but we still don't have the file modification time set correctly.</p><p>Let's fix that now!</p><h2 id="recovering-file-modification-timestamps">Recovering file modification timestamps</h2><p>FFmpeg isn't able to copy the file modification timestamp because it is not part of the metadata <em>inside</em> the file; it is metadata <em>of the actual file itself</em> as written by the OS. </p><p>Instead, we can use <a href="https://www.sno.phy.queensu.ca/~phil/exiftool/">exiftool</a> by Phil Harvey for this. This is a very powerful exif/metadata tool that is primarily used for photos, but has some support for videos too.</p><!--kg-card-begin: markdown--><pre><code class="language-bash">exiftool -tagsFromFile VID_20120116_121220.mp4 -extractEmbedded \
   -all:all -FileModifyDate -overwrite_original output.mp4
</code></pre>
<!--kg-card-end: markdown--><p>This will extract all the metadata (<code>-all:all</code>) from the original file, and copy it to <code>output.mp4</code>. In particular, we make sure to also copy the file-level <code>-FileModifyDate</code> tag.</p><blockquote>You could also use <code>touch</code>, like this <code>touch -r VID_20120116_121220.mp4 output.mp4</code>, to copy across the modification date. I'm using exiftool though, just in case there is any other metadata that FFmpeg misses</blockquote><p>Now, we have restored the correct file modification time:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">File Modification Date/Time     : 2012:01:16 11:13:54+00:00
</code></pre>
<!--kg-card-end: markdown--><p>Awesome! </p><p>Only, it's a bit manual and tedious to do this for our entire video collection...</p><h2 id="automating-it-all-with-shrinkwrap">Automating it all with Shrinkwrap</h2><p>For each video, our flow is the same:</p><ol><li>Use FFmpeg to transcode and compress the video</li><li>Recover the file-level metadata with Exiftool</li></ol><p>It would be nice to have a tool that bundles up everything we've seen into one simple package, wouldn't it?</p><p>For this reason, I created <a href="https://github.com/bennetimo/shrinkwrap">Shrinkwrap</a>.</p><p>Shrinkwrap takes as input one or more video files (or directories), <em>shrinks</em> them all, and then <em>wraps</em> them back up with the original metadata. The end result should be videos that are smaller, all the same type, and as close to the originals as possible. Basically, what our original goals were!</p><p>To use Shrinkwrap, you just need to <a href="https://www.docker.com/get-started">install Docker</a>, and then run something like:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">docker run -v /path/to/your/videos:/vids bennetimo/shrinkwrap \
    --input-extension MOD --ffmpeg-opts crf=22,preset=fast /vids
</code></pre>
<!--kg-card-end: markdown--><p>The key parts of this command are:</p><ul><li><code>/path/to/your/videos/</code> is the directory containing the videos you want to convert</li><li><code>--input-extension</code> is the type of videos you want to process, here <code>.MOD</code></li><li><code>--ffmpeg-opts</code> passes any arbitrary FFmpeg options you want to use to customise the transcode</li></ul><p>That's it, just let it run.</p><p>By default, each video will be shrunk into a new file of the same name with the suffix <code>-tc.mp4</code>, so that you can distinguish it from the originals. It will convert all video to h.264, and all audio to aac. </p><blockquote>The originals are not modified or touched, so you can try out different options and then only when happy, delete the original if you want</blockquote><p>Shrinkwrap will use a slightly more advanced FFmpeg command, a bit more like this:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">ffmpeg -i &quot;input.mp4&quot; -copy_unknown -map_metadata 0 -map 0 -codec copy \
    -codec:v libx264 -pix_fmt yuv420p -crf 23 \
    -codec:a libfdk_aac -vbr 4 \
    -preset fast &quot;output.mp4&quot;
</code></pre>
<!--kg-card-end: markdown--><p>Woah! Quite a lot going on there!</p><p>This translates as "<em>hey FFmpeg, take my video <code>input.mp4</code> and transcode it, making sure to <code>copy_unknown</code> streams, <code>map</code> all the streams you find, and <code>map_metadata</code> from my input file. For any video, convert it using <code>libx264</code>, with a <code>pix_fmt</code> of <code>yuv420p</code> and a <code>crf</code> quality of 23. For audio, I want it as aac using the <code>libfdk_aac</code> codec using a <code><a href="https://trac.ffmpeg.org/wiki/Encode/AAC#fdk_vbr">vbr</a></code> of 4. Finally for any other streams (e.g. data), just <code>copy</code> them as is, and do the whole thing </em><code>fast</code>!"</p><blockquote>Here you can really start to see the power of FFmpeg. You might want to specify additional filters too with <code>-vf</code>. e.g. if your video is interlaced, you can use <code>-vf=yadif</code> to de-interlace it</blockquote><p>For more customisation, you can check the <a href="https://github.com/bennetimo/shrinkwrap">readme</a>. 
There are also a couple of <a href="https://github.com/bennetimo/shrinkwrap#presets">Shrinkwrap presets</a> that do a few extra things, specifically for GoPro footage, that you might want to check out if that applies to you.</p><blockquote>I've also written a <a href="https://coderunner.io/how-to-compress-gopro-movies-and-keep-metadata-so-that-quik-is-happy/">separate post</a> specifically for compressing GoPro video files</blockquote><p>Now we have everything we need to shrink our ever growing collections and keep them maintainable!</p><p>Shrinkwrap is working for my needs right now, but if you have any comments or suggestions, be sure to leave them below!</p><p>Have fun saving space :)</p><!--kg-card-begin: html--><p class="attributed-image">Post cover image <a href="https://es.kisspng.com/kisspng-ao2b10/">source</a></p><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Hello, Blog! - An advanced setup of Ghost and Docker made simple (Updated 2018)]]></title><description><![CDATA[Let's set up a Ghost 2.x blog using Docker and Docker Compose, fronted by an nginx reverse proxy. We'll add simple backups, and make it easy to sync a local blog with a live version on the Internet.]]></description><link>https://coderunner.io/hello-blog-an-advanced-setup-of-ghost-and-docker-made-simple-2018/</link><guid isPermaLink="false">5bd9859967872600012a0764</guid><category><![CDATA[docker]]></category><category><![CDATA[ghost]]></category><category><![CDATA[docker-compose]]></category><dc:creator><![CDATA[Tim Bennett]]></dc:creator><pubDate>Thu, 01 Nov 2018 14:42:00 GMT</pubDate><media:content url="https://coderunner.io/content/images/2018/11/ghost-docker.png" medium="image"/><content:encoded><![CDATA[<img src="https://coderunner.io/content/images/2018/11/ghost-docker.png" alt="Hello, Blog! 
- An advanced setup of Ghost and Docker made simple (Updated 2018)"><p>Back in 2015 I wrote a <a href="https://coderunner.io/hello-blog-an-advanced-setup-of-ghost-and-docker-made-simple/">series of blog posts</a> describing a way to set up a <a href="https://ghost.org/">Ghost</a> blog using docker and docker-compose. </p><p>Since then a fair amount has changed (technology moves fast!). Ghost has updated from 0.7.x releases through 1.x and is now sitting at 2.x, with a number of changes along the way, most of which are not backwards compatible. Configuration has switched to <a href="https://blog.ghost.org/nconf/">nconf</a>, the main content path has changed, the database is completely updated, and the theme API has had extensive rework. Of course, the changes bring in a whole load of new features, including a much improved editor and support for richer content.</p><p>In the docker world a lot has changed too. Links have been <a href="https://docs.docker.com/network/links/">officially deprecated</a>, compose file <a href="https://docs.docker.com/compose/compose-file/compose-versioning/#versioning">version 1 is deprecated</a>, data-only containers have been deprecated in favour of <a href="https://docs.docker.com/storage/volumes/">docker volumes</a>, and volume support has made its way into compose. 
</p><p>So, it's time to update our blog stack to work with Ghost 2.x and bring everything up to date for 2018.</p><p>As before I'll walk through how I've set everything up piece by piece, so you can follow along with your own setup; just replace all references to <code>coderunner.io</code> with your own domain :)  If anything is not clear, or you have any other thoughts (or improvements!), drop me a comment below.</p><p>I have split this up into four parts:</p><ul><li>Part 1: Setting up a Dockerised installation of Ghost with MariaDB</li><li>Part 2: Deploying Ghost 2.x on DigitalOcean with Docker Compose (coming soon)</li><li>Part 3: Backing up a Dockerised Ghost blog using ghost-backup (coming soon)</li><li>Part 4: Syncing a local and remote Dockerised Ghost blog (coming soon)</li></ul><h2 id="the-goal"><strong>The Goal</strong></h2><p>What we're shooting for:</p><ul><li>Ability to bring up/down the whole stack with a single command (we'll use <a href="https://docs.docker.com/compose/">Docker Compose</a> for that)</li><li>Let us create content and write our posts on a local environment (e.g. laptop) before syncing it easily with a live host once we're ready</li><li>Front our blog with a <a href="https://en.wikipedia.org/wiki/Reverse_proxy">reverse proxy</a>, because we will be hosting it on a VPS and may want to have other blogs/apps on the same box</li><li>Easy and automated backups of our blog, to our local machine or a cloud storage service like Dropbox.</li><li>Stay as close as possible to <a href="https://docs.ghost.org/concepts/hosting/">Ghost's recommended stack</a>, and Docker best practices</li></ul><p>Ready? Let's get started! </p><h2 id="overview">Overview</h2><p>In this first post we will set up Ghost from their official docker image, backed by a MariaDB container, and fronted by <a href="https://www.nginx.com/resources/wiki/">Nginx</a>. 
To wire it all together, we'll use <a href="https://docs.docker.com/compose/">Docker Compose</a>.</p><blockquote>Before getting started, you should have <a href="https://docs.docker.com/engine/installation/">Docker</a> and <a href="https://docs.docker.com/compose/install/#prerequisites">Docker Compose</a> installed. If you're on Mac or Windows, then they both come bundled together</blockquote><h3 id="why-docker-why-mariadb">Why docker? Why MariaDB?</h3><p>Ghost can be set up with either <a href="https://www.sqlite.org/">sqlite3</a> or <a href="https://www.mysql.com/">MySQL</a>/<a href="https://mariadb.org/">MariaDB</a>.</p><p>We want to have our local environment mimic live as closely as possible, so that we can easily sync between the two. Because of this we will avoid sqlite3 (which is only recommended for development), and back our blog using a fully featured DB in both environments. This is one of the benefits of using Docker: we can easily package up our entire stack so it runs the same in both places. No more 'well it worked on my local machine!' problems. </p><p>I've chosen to use MariaDB, but you can use MySQL also if you prefer, just change the docker image. For our purposes, you should be able to drop in one as a replacement for the other.</p><h2 id="directory-structure">Directory Structure</h2><p>So that it is clear up-front, this is the directory structure we'll be putting together:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">.
├── config.base.json
├── docker-compose.yml
└── env
    └── coderunner.dev.env

1 directory, 3 files
</code></pre>
<!--kg-card-end: markdown--><p>We'll build up each one as we go, but you can browse all the code for this part <a href="https://github.com/bennetimo/hello-blog/tree/part1">on github</a>  if you want to check something, or to refer back to the complete solution anytime.</p><blockquote>You can checkout all the files for this part with <code>git clone -b part1 git@github.com:bennetimo/hello-blog.git</code></blockquote><h2 id="creating-a-docker-volume">Creating a Docker Volume</h2><p>First things first, we need a place to store the great blog content we'll be creating! </p><p>We could store everything directly on the host and then<a href="https://docs.docker.com/storage/bind-mounts/"> bind mount</a> the volume into the container, but this makes everything less portable and very host-specific; we would have to worry about paths, and making sure they're correct on whichever host we'll be running on. </p><p>In the previous version of this post I used a <a href="http://container42.com/2013/12/16/persistent-volumes-with-docker-container-as-volume-pattern/">data only container</a> to get round this. That would still work, but since then Docker Volumes have come a long way as well as now being fully supported by Compose. So, this is now the <a href="https://docs.docker.com/storage/volumes/">preferred method</a> of storing data within docker.</p><p>So let's kick things off by creating a file, call it <code>docker-compose.yml</code> and put it in a folder on your machine, let's call it <code>hello-blog</code>.</p><p>In this file we'll be declaratively listing all of the components that make up our stack, and how they <em>compose</em> together. </p><p>Here is the first version of our file:</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">version: &quot;3.7&quot;

# Data volumes containing all the persistent storage for the blog
volumes:
 data-ghost:
  name: data-ghost
 data-db:
  name: data-db
</code></pre>
<!--kg-card-end: markdown--><p>This is a very simple docker-compose file that just declares two volumes, which will then be created for us if they don't exist. We have one that will hold the MariaDB database, and the other that will hold all of the ghost content. </p><blockquote>We're using Compose file version 3.7, which at the time of writing is the latest. Anything without a version number is considered version 1 and legacy, see <a href="https://docs.docker.com/compose/compose-file/">here</a> for more info.</blockquote><p>Now that we've got a volume to store our content, we can configure the database to use it.</p><h2 id="setup-our-database">Setup our database</h2><p>There's an officially supported <a href="https://hub.docker.com/_/mariadb/">image</a> for MariaDB which makes our lives easy. </p><p>All we need to do is add it to our <code>docker-compose.yml</code>, as a new section below the version line:</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">services:
 # Database container
 mysql:
  image: mariadb:10.3
  container_name: &quot;db&quot;
  restart: always
  env_file: env/coderunner.dev.env
  expose:
   - &quot;3306&quot;
  volumes:
   - &quot;data-db:/var/lib/mysql&quot;
</code></pre>
<!--kg-card-end: markdown--><p>There's a few things going on here, so let's go through it. </p><p>We're creating a database container using the <code>mariadb:10.3</code> image that is available on <a href="https://hub.docker.com/">Docker Hub</a>. It exposes the default <code>3306</code> port so that other containers can talk to it, and is set to <code>restart</code> automatically if it should ever die.</p><p>The <code>container_name</code> isn't required, but we've added it to override the default name that would otherwise be generated, to just be the simpler <code>db</code>.</p><blockquote>The <code>container_name</code> is like the external name that we'll see when we interact with our container using the docker cli. The service name is different, here we've called it <code>mysql</code>, and is the internal name used within the docker network. This name is important, and we'll see why later.</blockquote><p>The line <code>"data-db:/var/lib/mysql"</code> tells docker that we would like the <code>/var/lib/mysql</code> directory within the container to actually be stored inside our <code>data-db</code>volume. </p><blockquote>Any other directories that the MariaDB container uses will still only exist within that container</blockquote><p>And finally, we also specified an <code>env_file</code> with our db configuration:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">## MariaDB configuration
MYSQL_ROOT_PASSWORD=&lt;YOURDBROOTPASSWORD&gt;
MYSQL_USER=&lt;YOURDBUSER&gt;
MYSQL_PASSWORD=&lt;YOURDBPASSWORD&gt;
MYSQL_DATABASE=ghost
</code></pre>
<!--kg-card-end: markdown--><p>Fill in the blanks for your own blog setup.</p><h2 id="setup-ghost">Setup Ghost</h2><p>Next up we need to actually add Ghost, and we have an <a href="https://hub.docker.com/_/ghost/">official image</a> for that too, awesome!</p><!--kg-card-begin: markdown--><pre><code class="language-yaml"> # Ghost containers
 blog:
  image: ghost:2.2
  container_name: &quot;blog&quot;
  restart: always
  env_file: env/coderunner.dev.env
  volumes:
   - &quot;data-ghost:/var/lib/ghost/content&quot;
   - &quot;./config.base.json:/var/lib/ghost/config.development.json:ro&quot;
   - &quot;./config.base.json:/var/lib/ghost/config.production.json:ro&quot;
</code></pre>
<!--kg-card-end: markdown--><p>As before, we want all user content to live inside the volume we created, so we tell docker to store the ghost content directory that lives at <code>/var/lib/ghost/content</code> inside our <code>data-ghost</code> volume.</p><p>The only other thing new here is a couple of lines for setting up our Ghost config files.</p><p>Since Ghost 1.0, all config is handled via <a href="https://blog.ghost.org/nconf/">nconf</a>. This means we can use a <code>config.&lt;env&gt;.json</code> file to configure what settings we need for each environment, and Ghost will load the correct file (matching the Ghost environment) automatically if it's located in the correct place.</p><p>The nice thing about using nconf is that every setting can also be specified as an environment variable, which if set will override any values from the config file. </p><p>So we can have a base config file with any common settings, and override any environment specific settings with environment variables.</p><p>We mount the base file <code>config.base.json</code> as both the <code>development</code> and <code>production</code> config files. <a href="https://github.com/bennetimo/hello-blog/blob/part1/config.base.json">Here is the content</a> of that file.</p><p>And <a href="https://docs.ghost.org/concepts/config/#custom-configuration-files">then</a> in our <code>env/coderunner.dev.env</code> file we add the dev specific settings:</p><!--kg-card-begin: markdown--><pre><code class="language-bash"># Ghost configuration
url=http://coderunner.io.develop
database__connection__user=&lt;YOURDBUSER&gt;
database__connection__password=&lt;YOURDBPASSWORD&gt;
NODE_ENV=development
</code></pre>
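<p>As an aside, the double underscores are nconf's path separator: each variable overrides the correspondingly nested key in the JSON config. For example, <code>database__connection__user</code> is equivalent to this fragment of the JSON config (with <code>your-db-user</code> standing in for the placeholder value above):</p>

```json
{
  "database": {
    "connection": {
      "user": "your-db-user"
    }
  }
}
```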
<!--kg-card-end: markdown--><p>The <code>url</code> value is a Ghost config setting to set the url of our blog. And the <code>NODE_ENV</code> sets the environment that Ghost will start in. For the database details, just make sure they match what you set earlier for MariaDB.</p><h3 id="how-does-the-ghost-container-talk-to-the-database-container">How does the Ghost container talk to the database container?</h3><p>In older versions of Docker we would use <a href="https://docs.docker.com/network/links/">container links</a> to network our ghost and database containers together so that they could talk to each other. This had the side effect of making all environment variables defined in one container available to any container it was linked with. While this meant some setup boiler plate was reduced, it had a number of issues and has now been deprecated. </p><p>Instead, all containers are now connected to the same network by default. This means that our ghost container can automatically talk to the MariaDB container using its service name <code>mysql</code>, without us having to do anything extra. If we wanted to use a different name, or have multiple hostnames then we could use <a href="https://docs.docker.com/compose/compose-file/#aliases">network aliases</a>.</p><p>Our ghost config file has the database host set to <code>mysql</code>, which is the same as the service name, so nothing more is required for the two to communicate.</p><p>At this point we <em>could</em> fire up our blog, but we wouldn't be able to access it from our local machine as we're not exposing the ghost ports. 
We will go one better than exposing the Ghost port directly, and set up <a href="https://www.nginx.com/resources/wiki/">nginx</a>.</p><h2 id="put-it-all-behind-nginx"><strong>Put it all behind nginx</strong></h2><p>By setting everything up behind an nginx reverse proxy, we can have multiple services (applications, other blogs etc) running on a single box and have nginx handling traffic routing between them. We could set this up manually, but there is already an awesome out-the-box Docker setup in <a href="https://hub.docker.com/r/jwilder/nginx-proxy/">jwilder/nginx-proxy</a>.</p><p>Now we're really starting to see the magic and power of Docker. We're building our application by sticking together components like lego bricks! If we need a new piece, we first check <a href="https://hub.docker.com/">Docker Hub</a> to see if a suitable one already exists that we can use.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://coderunner.io/content/images/2018/11/8505316460_78d0abaf5b_b.jpg" class="kg-image" alt="Hello, Blog! - An advanced setup of Ghost and Docker made simple (Updated 2018)"><figcaption><a href="https://www.flickr.com/photos/elpadawan/8505316460/">Photo</a> by elPadawan / <a href="http://creativecommons.org/licenses/by/2.0/">CC BY</a></figcaption></figure><!--kg-card-end: image--><p>Let's add nginx-proxy to our <code>docker-compose.yml</code>:</p><!--kg-card-begin: markdown--><pre><code class="language-yaml"># Reverse Proxy
 nginx-proxy:
  image: jwilder/nginx-proxy:0.7.0
  container_name: &quot;nginx-proxy&quot;
  restart: always
  ports:
   - &quot;80:80&quot;
  volumes:
   - /var/run/docker.sock:/tmp/docker.sock:ro
</code></pre>
<!--kg-card-end: markdown--><p>And that's all we need to create a fully-fledged reverse proxy! Now we just need to tell it the hostname that will map to our blog, by adding a single environment variable to the blog container:</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">environment:
  - VIRTUAL_HOST=coderunner.io.develop
</code></pre>
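<p>For context, this <code>environment</code> section slots into the <code>blog</code> service we defined earlier, so the full service definition now reads:</p>

```yaml
 # Ghost containers
 blog:
  image: ghost:2.2
  container_name: "blog"
  restart: always
  env_file: env/coderunner.dev.env
  environment:
   - VIRTUAL_HOST=coderunner.io.develop
  volumes:
   - "data-ghost:/var/lib/ghost/content"
   - "./config.base.json:/var/lib/ghost/config.development.json:ro"
   - "./config.base.json:/var/lib/ghost/config.production.json:ro"
```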
<!--kg-card-end: markdown--><p>This simple environment variable is all we need to tell nginx to route any traffic destined for the url <code>coderunner.io.develop</code> on port <code>80</code> to be handled by our ghost container.</p><p>Great!</p><blockquote><a href="https://ma.ttias.be/chrome-force-dev-domains-https-via-preloaded-hsts/">Chrome and Firefox now redirect all </a><code>.dev</code><a href="https://ma.ttias.be/chrome-force-dev-domains-https-via-preloaded-hsts/"> traffic to https</a>, which is why I now use <code>.develop</code> here instead. Otherwise you'd have to mess around setting up SSL certificates etc on your local machine</blockquote><h2 id="start-it-up-"><strong>Start it up!</strong></h2><p>In the main blog directory:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">docker-compose up
</code></pre>
<!--kg-card-end: markdown--><p>And we're running!</p><blockquote>On the very first launch the Ghost container might try to connect to MariaDB before it's finished setting up the database. To avoid it you can start MariaDB separately first with <code>docker-compose up -d mysql</code>, or by using my <a href="https://hub.docker.com/r/bennetimo/ghost-wait-mysql/">modified image</a>. See <a href="https://github.com/docker/compose/issues/374">here</a> for more info.</blockquote><p>We just need to add this mapping to our hosts file:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">127.0.0.1		coderunner.io.develop
</code></pre>
<!--kg-card-end: markdown--><p>So that our local machine knows to route the traffic to our local nginx server.</p><blockquote>Or you could use a hosts file manager like <a href="https://github.com/2ndalpha/gasmask">Gas Mask</a> for macOS</blockquote><p>Now we can fire up a browser and visit <a href="http://coderunner.io.develop">http://coderunner.io.develop</a>, and we're greeted with Ghost:</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://coderunner.io/content/images/2018/11/ghost-initial-install.png" class="kg-image" alt="Hello, Blog! - An advanced setup of Ghost and Docker made simple (Updated 2018)"></figure><!--kg-card-end: image--><p>We now have a Ghost blog running, linked to a MariaDB container, and fronted by an Nginx reverse proxy, all running in Docker containers. Nice!</p><p>Next up, we need to set it up running live on the Internet so, you know, people can actually read it.</p>]]></content:encoded></item><item><title><![CDATA[Syncing a Dockerised Ghost blog to DigitalOcean with automated backups]]></title><description><![CDATA[We now have a local and remote Ghost environment ready, but we're missing something- a way to keep them in sync; It's time to add the final piece!]]></description><link>https://coderunner.io/syncing-a-dockerised-ghost-blog-to-digital-ocean-with-automated-backups/</link><guid isPermaLink="false">5bc4a358dc6f5d00018f800d</guid><category><![CDATA[docker]]></category><category><![CDATA[ghost]]></category><category><![CDATA[docker-compose]]></category><category><![CDATA[digitalocean]]></category><category><![CDATA[backup]]></category><category><![CDATA[sync]]></category><dc:creator><![CDATA[Tim Bennett]]></dc:creator><pubDate>Sun, 17 Jan 2016 18:43:00 GMT</pubDate><content:encoded><![CDATA[<p><em>Update: This blog series has been updated for Ghost 2.x. 
If you've landed here looking to set up a new Ghost blog, you should follow the <a href="https://coderunner.io/hello-blog-an-advanced-setup-of-ghost-and-docker-made-simple-2018/">updated version</a>.</em></p><p>In <a href="https://coderunner.io/hello-blog-an-advanced-setup-of-ghost-and-docker-made-simple/">Part 1</a> we set up a <a href="https://ghost.org/">Ghost</a> blog running locally in <a href="https://www.docker.com/">Docker</a> containers, wired together with <a href="https://docs.docker.com/compose/">Docker Compose</a>. In <a href="https://coderunner.io/deploying-ghost-on-digital-ocean-with-docker-compose/">Part 2</a> we deployed it to a Droplet (<a href="https://en.wikipedia.org/wiki/Virtual_private_server">VPS</a>) on <a href="https://www.digitalocean.com/">DigitalOcean</a>.</p>
<p>Along the way we've seen how building up our stack using Docker is a little like playing with lego. We join together a bunch of useful, single-purpose, bricks to make something bigger.</p>
<p>We now have a local and remote Ghost environment ready, but we're missing something: a way to keep them in sync. It's time to add the final piece in our lego pie!</p>
<div class="attributed-image">
<img src="https://coderunner.io/content/images/2015/12/16601966117_44d37aabb1_z.jpg" alt="Cherry Pie for Pi Day 2015 with Slice by Bill Ward">
<a href="https://www.flickr.com/photos/billward/16601966117/">Photo</a>
 by billward / <a href="http://creativecommons.org/licenses/by/2.0/">CC BY</a>
</div>
<p>While we're at it, we'll also setup automated backups to Dropbox. It probably won't help us in the event of a <a href="http://www.extremetech.com/extreme/186805-the-solar-storm-of-2012-that-almost-sent-us-back-to-a-post-apocalyptic-stone-age">giant solar storm</a>, but in less extreme scenarios might stop us from losing all those posts, gifs and memes!</p>
<p>This is the 3rd and final part of the series:</p>
<ul class="grey-box">
 <li> <a href="https://coderunner.io/hello-blog-an-advanced-setup-of-ghost-and-docker-made-simple/">Part 1: Setting up a Dockerised installation of Ghost with MariaDB</a></li>
 <li> <a href="https://coderunner.io/deploying-ghost-on-digital-ocean-with-docker-compose/">Part 2: Deploying Ghost on DigitalOcean with Docker Compose</a></li>
 <li> Part 3: Syncing a Dockerised Ghost blog to Digital Ocean with automated backups </li>
</ul>
<p>Cool, let's finish our setup!</p>
<h4 id="guide">Guide</h4>
<ol>
<li><a href="#whyhavealocalenvironment">Why have a local environment?</a></li>
<li><a href="#takingamanualbackup">Taking a manual backup</a></li>
<li><a href="#addingautomatedbackupwithghostbackup">Adding automated backup with ghost-backup</a></li>
<li><a href="#puttingourbackupsindropbox">Putting our backups in Dropbox</a></li>
<li><a href="#restoringabackup">Restoring a backup</a></li>
<li><a href="#manualsync">Manual sync</a></li>
<li><a href="#usingghostsync">Using ghost-sync</a></li>
<li><a href="#testingtheworkflow">Testing the workflow</a></li>
<li><a href="#wrappingup">Wrapping Up</a></li>
</ol>
<h3 id="whyhavealocalenvironment">Why have a local environment?</h3>
<p>Before we continue, you might wonder why we're bothering with the local environment at all. We could use the Compose file we have already to <code>up</code> the stack on our Droplet, write and publish our posts there, and be done.</p>
<p>While that would be a valid approach (and might be all you need), there are some benefits to setting up our local environment:</p>
<ul>
<li>We can publish posts locally to check the formatting and how they will look in the wild (Ghost has a great live-preview of Markdown, but it is still not as good as a complete rendering, especially if you have custom CSS)</li>
<li>We can modify our theme and detect problems before pushing it live</li>
<li>We can work offline (or on a terrible Internet connection!)</li>
</ul>
<h3 id="takingamanualbackup">Taking a manual backup</h3>
<p>As we created a separate data-only container for our data, we could take a manual backup by running:</p>
<p><code>docker run --rm --volumes-from data-coderunner.io -v ~/backup:/backup ubuntu tar cfz /backup/backup_$(date +%Y_%m_%d).tar.gz /var/lib/ghost</code></p>
<p>This would fire up a new container using the <code>ubuntu</code> image, mount our data container volumes, and then create a compressed and dated tarball of the entire <code>/var/lib/ghost</code> folder into the <code>~/backup</code> folder mounted on our host. Nice.</p>
<blockquote>
<p>Taking a file dump of the database in this way should only be done while it is shut down or appropriately locked, or you risk data corruption.</p>
</blockquote>
<p>Once we have our tarball, we could restore it later with a similar method. This is okay, but we could do better.</p>
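<p>To make that concrete, here is the round trip as a runnable sketch with Docker stripped away; throwaway temp directories stand in for <code>/var/lib/ghost</code> and <code>~/backup</code>, but the tar invocations mirror the ones above.</p>

```shell
# A minimal sketch of the backup/restore round trip, minus Docker.
# Temp directories are illustrative stand-ins for the real paths.
src=$(mktemp -d)      # plays the role of /var/lib/ghost
backup=$(mktemp -d)   # plays the role of ~/backup
echo "hello" > "$src/post.md"

# Backup: a dated, compressed tarball of the content folder
tar czf "$backup/backup_$(date +%Y_%m_%d).tar.gz" -C "$src" .

# Restore: unpack the archive back into place
rm "$src/post.md"
tar xzf "$backup"/backup_*.tar.gz -C "$src"
cat "$src/post.md"    # the deleted file is back
```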
<p>We want to avoid having to shut down our database for the backup, and we would be better off using dedicated tools to handle it. If you're running Ghost with SQLite there is the <a href="https://www.sqlite.org/backup.html">online backup API</a>, and for MySQL/MariaDB there is <a href="https://dev.mysql.com/doc/refman/5.5/en/mysqldump.html">mysqldump</a>. Also, it would be nice to have it automated.</p>
<p>For that purpose, I created <a href="https://github.com/bennetimo/ghost-backup">ghost-backup</a>, a separate container for managing backup and restore of Ghost.</p>
<h3 id="addingautomatedbackupwithghostbackup">Adding automated backup with ghost-backup</h3>
<p>The ghost-backup image is <a href="https://hub.docker.com/r/bennetimo/ghost-backup/">published</a> on Docker Hub. We can use it by adding this to our <code>docker-compose.yml</code> file:</p>
<pre><code class="language-yaml"># Ghost Backup
backup-blog-coderunner.io:
 image: bennetimo/ghost-backup
 container_name: &quot;backup-blog-coderunner.io&quot;
 links:
  - mariadb-coderunner.io:mysql
 volumes_from:
  - data-coderunner.io
</code></pre>
<p>This will create a ghost-backup container linked to our database container, taking a snapshot of our database and files every day at 3am and storing them in <code>/backups</code>.</p>
<blockquote>
<p>The database link needs to be named <code>mysql</code> as shown, as this becomes the hostname that the container uses to communicate with the database</p>
</blockquote>
<p>To change the defaults, for example the backup directory or schedule, you can <a href="https://github.com/bennetimo/ghost-backup#advanced-configuration">customise the configuration</a>.</p>
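<p>Daily archives also accumulate. ghost-backup's own options may well cover retention, so check its configuration first; purely as a generic illustration, a sketch that keeps only the newest few archives in a directory might look like this (the <code>/backups</code> path and count of 7 are placeholders, not ghost-backup settings):</p>

```shell
# Illustrative only: prune old archives, keeping the newest few.
# Assumes archive names without spaces; not a ghost-backup feature.
prune_backups() {
  dir="$1"; keep="$2"
  # List newest-first, skip the first $keep entries, delete the rest
  ls -1t "$dir"/*.tar.gz 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm --
}

prune_backups /backups 7
```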
<p>Because the Docker linking system <a href="https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/#environment-variables">exposes</a> all of a source container's environment variables, the container can authenticate with MariaDB without us having to configure anything.</p>
<h3 id="puttingourbackupsindropbox">Putting our backups in Dropbox</h3>
<p>At the moment our backups are just being created on the Droplet. We should have a copy stored offsite to at least get us the '1' in a <a href="http://blog.trendmicro.com/trendlabs-security-intelligence/world-backup-day-the-3-2-1-rule/">3-2-1 backup strategy</a>.</p>
<p>Seeing as we're dealing with a blog and not mission-critical data, one simple thing we can do is just push our backups to <a href="https://www.dropbox.com/">Dropbox</a>, and there is a <a href="https://www.dropbox.com/en_GB/install?os=lnx">headless Linux client</a> which makes this trivial.</p>
<p>We just need to set it up, and then change our backup location to use the Dropbox folder. Of course we're using Docker, so we should use a container for it! I put together a simple one in <a href="https://github.com/bennetimo/docker-dropbox">docker-dropbox</a>.</p>
<p>We just need to add it to our <code>docker-compose.yml</code>:</p>
<pre><code class="language-yaml"># Dropbox
dropbox:
 image: bennetimo/docker-dropbox
 container_name: &quot;dropbox&quot;
</code></pre>
<p>And then add the Dropbox container volume to our ghost-backup container:</p>
<pre><code class="language-yaml"> volumes_from:
  - data-coderunner.io
  - dropbox
</code></pre>
<blockquote>
<p>The first time you launch the container, you'll see a <a href="https://github.com/bennetimo/docker-dropbox#quick-start">link</a> in the logs that you need to visit to connect with your Dropbox account.</p>
</blockquote>
<p>Finally, we tweak our ghost-backup config to use the Dropbox folder as its location:</p>
<pre><code class="language-yaml">environment:
  - BACKUP_LOCATION=/root/Dropbox/coderunner
</code></pre>
<p>And we're done. All our backups will now make their way to Dropbox.</p>
<blockquote>
<p>By including the backup container in <code>docker-compose.yml</code> it will be part of both our local and live setups. As we'll see below, that's what we want, but it probably makes sense to <a href="https://github.com/bennetimo/ghost-backup#disabling-automated-backups">disable automated backups</a> locally in <code>docker-compose.override.yml</code></p>
</blockquote>
<h3 id="restoringabackup">Restoring a backup</h3>
<p>A backup is no use if we can't restore it, and we can do that with:</p>
<p><code>docker exec -it backup-blog-coderunner.io restore -i</code></p>
<p>This will present a choice of all the backup archives found and ask which to restore. Alternatively, we can restore by <a href="https://github.com/bennetimo/ghost-backup#by-file-restore">file</a> or by <a href="https://github.com/bennetimo/ghost-backup#by-date-restore">date</a>.</p>
<h3 id="manualsync">Manual sync</h3>
<p>Essentially a sync is a snapshot of our local environment, restored onto our live environment. As we now have our ghost-backup container configured on both, we could:</p>
<ol>
<li>Take a manual backup on the local environment (we can use <code>docker exec backup-blog-coderunner.io backup</code> for that)</li>
<li><code>scp</code> the created database and files archive to the Droplet</li>
<li>Restore the archives on the Droplet with <code>docker exec backup-blog-coderunner.io restore -f /path/to/file</code></li>
</ol>
<blockquote>
<p>For step 3 we would need to either use <a href="https://docs.docker.com/engine/reference/commandline/cp/">docker cp</a> to put the archives into the ghost-backup container, or mount a directory from the host to the container for our restore archives</p>
</blockquote>
<p>This approach would work, but it's a bit cumbersome and manual. With Dropbox set up we avoid step 2, but we still have to watch the sync folder until our files are ready to restore.</p>
<p>If our posts use a lot of images, the Ghost files archive will also quickly become large, and shipping it around gets tedious.</p>
<p>For simpler, 'one button' sync I created <a href="https://github.com/bennetimo/ghost-sync">ghost-sync</a>.</p>
<h3 id="usingghostsync">Using ghost-sync</h3>
<p>ghost-sync uses <a href="http://linux.die.net/man/1/rsync">rsync</a> to transfer the images, so it's incremental, only copying across anything new. For the database sync, it piggybacks off ghost-backup.</p>
<p>To set it up, we first need to add it to our <code>docker-compose.override.yml</code> (as we only want it locally):</p>
<pre><code class="language-yaml">sync-blog-coderunner.io:
 image: bennetimo/ghost-sync
 container_name: &quot;sync-blog-coderunner.io&quot;
 entrypoint: /bin/bash
 environment:
  - SYNC_HOST=&lt;dropletip&gt;
  - SYNC_USER=&lt;dropletuser&gt;
  - SYNC_LOCATION=&lt;syncfolder&gt;
 volumes:
  - ~/.ssh/&lt;privatekey&gt;:/root/.ssh/id_rsa:ro
  - /var/run/docker.sock:/var/run/docker.sock:ro
 volumes_from:
  - backup-blog-coderunner.io
 links:
  - backup-blog-coderunner.io:backup
</code></pre>
<p>There's a few things going on here, so let's break it down.</p>
<p>We have overridden the <code>entrypoint</code>, which is the command run when the container starts, to prevent a sync happening every time we <code>up</code>.</p>
<blockquote>
<p>This <a href="https://github.com/docker/compose/issues/1896">issue</a> tracks potential support for services that can be configured not to auto-start in Compose</p>
</blockquote>
<p>We also mount an appropriate SSH private key and set some <code>environment</code> variables so we can connect to the Droplet. The <code>syncfolder</code> is where ghost-sync will rsync all of the images to.</p>
<p>Finally for the database sync we need to be able to talk to the ghost-backup container, so we add it as a link and a volume, and mount the docker socket.</p>
<p>Now we just need to make two small additions in <code>docker-compose.live.yml</code>:</p>
<pre><code class="language-yaml">data-coderunner.io:
 volumes:
  - /sync/coderunner.io/images:/var/lib/ghost/images

backup-blog-coderunner.io:
 volumes:
  - /sync/coderunner.io:/sync/coderunner.io:ro
</code></pre>
<p>We mount <code>syncfolder/images</code> as the Ghost images folder in our data-only container, so we can rsync directly to it. And we mount the <code>syncfolder</code> again in the backup container, so that we'll be able to initiate a restore of our database archive from there.</p>
<blockquote>
<p>ghost-sync can also sync the themes and apps folders with the <code>-t</code> and <code>-a</code> flags</p>
</blockquote>
<p>At this point we have a way to sync between our environments, so let's test it out!</p>
<h3 id="testingtheworkflow">Testing the Workflow</h3>
<p>If you have followed everything up to this point, then all the pieces are in place for our desired workflow.</p>
<ol>
<li>Create content at <a href="http://coderunner.io.dev/ghost">http://coderunner.io.dev/ghost</a></li>
<li>Once happy, run <code>docker-compose run --rm sync-blog-coderunner.io sync -id</code> to push the content live by syncing the database and images</li>
<li>View the content on <a href="http://coderunner.io">http://coderunner.io</a></li>
</ol>
<p>And we're done!</p>
<h3 id="wrappingup">Wrapping Up</h3>
<p>Now that you have your nice new Ghost blog set up, here are a few other things you might want to explore.</p>
<ul>
<li>Customising your theme: the default Casper theme is a nice starting point, but there are loads of great free (and paid) themes available at places like <a href="http://marketplace.ghost.org/themes/free/">Ghost Marketplace</a>, <a href="http://www.allghostthemes.com/">All Ghost Themes</a> or <a href="http://themeforest.net/category/blogging/ghost-themes">Theme Forest</a>. I have another little container, <a href="https://github.com/bennetimo/ghost-themer">ghost-themer</a>, which might be useful for trying some out.</li>
<li>Adding <a href="http://support.ghost.org/add-google-analytics-blog/">Google Analytics</a></li>
<li>Adding <a href="https://help.disqus.com/customer/portal/articles/1454924-ghost-installation-instructions">comments</a></li>
<li>Adding other blogs/services; our modular Dockerised setup means we can set up other things behind our reverse proxy nice and simply. Of course you might need to upgrade to a more powerful Droplet :)</li>
</ul>
<p>If you have any questions or suggestions about anything then feel free to leave a comment below :)</p>
]]></content:encoded></item><item><title><![CDATA[Deploying Ghost on DigitalOcean with Docker Compose]]></title><description><![CDATA[In the last post (Part 1) we set up a new blog with Ghost and MariaDB, running behind an nginx reverse proxy, and all using Docker containers set up with Docker Compose.

Now we will get those containers running on a VPS with DigitalOcean so people can actually see our blog!]]></description><link>https://coderunner.io/deploying-ghost-on-digital-ocean-with-docker-compose/</link><guid isPermaLink="false">5bc4a358dc6f5d00018f800c</guid><category><![CDATA[docker]]></category><category><![CDATA[ghost]]></category><category><![CDATA[docker-compose]]></category><category><![CDATA[digitalocean]]></category><dc:creator><![CDATA[Tim Bennett]]></dc:creator><pubDate>Wed, 30 Dec 2015 15:58:46 GMT</pubDate><content:encoded><![CDATA[<p><em>Update: This blog series has been updated for Ghost 2.x. If you've landed here looking to set up a new Ghost blog, you should follow the <a href="https://coderunner.io/hello-blog-an-advanced-setup-of-ghost-and-docker-made-simple-2018/">updated version</a>.</em></p><p>In the last post <a href="http://coderunner.io/hello-blog-an-advanced-setup-of-ghost-and-docker-made-simple/">(Part 1)</a> we set up a new blog with <a href="https://ghost.org/">Ghost</a> and <a href="https://mariadb.org/">MariaDB</a>, running behind an <a href="https://www.nginx.com/">nginx</a> reverse proxy, and all using <a href="https://www.docker.com/">Docker</a> containers set up with <a href="https://docs.docker.com/compose/">Docker Compose</a>.</p>
<p><img src="https://coderunner.io/content/images/2015/12/DO_Logo_Horizontal_Blue.png" alt="Digital Ocean Logo"></p>
<p>Now we will get those containers running on a <a href="https://en.wikipedia.org/wiki/Virtual_private_server">VPS</a> with <a href="https://www.digitalocean.com/">DigitalOcean</a> so people can actually see our blog! If you have any comments or questions, feel free to leave them below :)</p>
<p>This is part 2 of the series:</p>
<ul class="grey-box">
 <li> <a href="https://coderunner.io/hello-blog-an-advanced-setup-of-ghost-and-docker-made-simple/">Part 1: Setting up a Dockerised installation of Ghost with MariaDB</a></li>
 <li> Part 2: Deploying Ghost on DigitalOcean with Docker Compose</li>
 <li> <a href="https://coderunner.io/syncing-a-dockerised-ghost-blog-to-digital-ocean-with-automated-backups/">Part 3: Syncing a Dockerised Ghost blog to DigitalOcean with automated backups</a></li>
</ul>
<p>OK, let's get our blog up and running on the Internet. As before, you can follow along by replacing any references to <code>coderunner.io</code> with your own domain.</p>
<h4 id="guide">Guide</h4>
<ol>
<li><a href="#whyavps">Why a VPS?</a></li>
<li><a href="#whydigitalocean">Why DigitalOcean?</a></li>
<li><a href="#launchingadropletvps">Launching a Droplet (VPS)</a></li>
<li><a href="#deployingourblogwithdockercomposeandgit">Deploying our blog with Docker Compose and Git</a></li>
<li><a href="#pointingourdomaintoourdroplet">Pointing our domain to our droplet</a></li>
<li><a href="#improvingtheworkflow">Improving the workflow</a></li>
</ol>
<h4 id="whyavps">Why a VPS?</h4>
<p>If a dedicated server is analogous to owning an entire house on the beach-front to yourself, and shared-hosting is like renting a room in the nearby hotel, then what's a VPS? It's like owning a caravan in the holiday park; you share some things like the shop and access to the beach, but mostly you have your own environment, and you're responsible for maintaining and looking after it.</p>
<div class="attributed-image">
<img src="https://coderunner.io/content/images/2015/12/800px-Caravans_at_beer_devon_arp.jpg" alt="Caravans at Beer, Devon. By Arpingstone" title>
<a href="https://en.wikipedia.org/wiki/RV_park#/media/File:Caravans_at_beer_devon_arp.jpg">Photo</a>
 by Arpingstone / <a href="http://creativecommons.org/licenses/by/2.0/">CC BY</a>
</div>
<p>A dedicated server would be overkill, at least in the beginning. But a shared host might be underkill, as we would be competing for resources on the box with who-knows-how-many other sites doing who-knows-what. Back in our beach hotel, if the person next door is playing really loud music every night at 3am, we can't do much about it.</p>
<p>A VPS gives us a nice hybrid, where we're still sharing resources on a box but we're given a fixed allocation that is just for us; Our blog won't start performing poorly because someone else is hogging all the resources.</p>
<blockquote>
<p>If we were hosting a simple static website (maybe using something like <a href="https://jekyllrb.com/">Jekyll</a>), then some other great options would be <a href="http://docs.aws.amazon.com/gettingstarted/latest/swh/website-hosting-intro.html">Amazon S3</a> or <a href="https://pages.github.com/">Github Pages</a>. As we're using Ghost we need more than static pages, but it's worth keeping in mind for other projects.</p>
</blockquote>
<h4 id="whydigitalocean">Why DigitalOcean?</h4>
<p>There are a huge number of VPS providers out there, all with different pricing models and features. I went with DigitalOcean because they're aimed at developers and have a great <a href="https://www.digitalocean.com/community/">community</a>. It helps, of course, that all their servers are backed by fast SSDs and start at only $5 a month!</p>
<p><img src="https://coderunner.io/content/images/2015/12/digital-ocean-prices.png" alt="DigitalOcean pricing -fullwidth"></p>
<p>You could also check out <a href="https://www.linode.com/">Linode</a> or <a href="http://google.com/#q=vps+hosting">something else</a>.</p>
<blockquote>
<p><a href="https://www.quora.com/Which-is-a-better-host-for-personal-work-Linode-or-DigitalOcean">This Quora</a> question compares DigitalOcean and Linode.</p>
</blockquote>
<h4 id="launchingadropletvps">Launching a Droplet (VPS)</h4>
<p>A VPS on DigitalOcean is called a Droplet, and getting our blog up and running on one can be done before your cup of tea goes cold.</p>
<ol>
<li>First, you need to create an account. You can use my <a href="https://www.digitalocean.com/?refcode=c0b294deec25">referral link</a> if you want to get $10 credit :)</li>
<li>Now create a new Droplet, and use the one-click-app configuration that includes Docker to make life easy <img src="https://coderunner.io/content/images/2015/12/digital-ocean-choose-image.png" alt="Droplet with Docker -fullwidth"></li>
</ol>
<blockquote>
<p>The $5 droplet is fine to get going, and you can <a href="https://www.digitalocean.com/community/tutorials/how-to-resize-your-droplets-on-digitalocean">scale it up</a> later if needed</p>
</blockquote>
<ol start="3">
<li>Do some one-time configuration for the Droplet, like setting up SSH keys and disabling root access. You can use their <a href="https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-14-04">guide</a> for that. You might also want to follow the <a href="https://www.digitalocean.com/community/tutorials/additional-recommended-steps-for-new-ubuntu-14-04-servers">recommended steps</a> for Ubuntu droplets.</li>
<li><a href="https://docs.docker.com/compose/install/">Setup Docker Compose</a> on the Droplet</li>
</ol>
<pre><code class="language-bash">curl -L https://github.com/docker/compose/releases/download/1.5.2/docker-compose-`uname -s`-`uname -m` &gt; /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
</code></pre>
<p>Done, our Droplet is now ready. Let's deploy our blog to it!</p>
<h4 id="deployingourblogwithdockercomposeandgit">Deploying our blog with Docker Compose and Git</h4>
<p>In the <a href="http://coderunner.io/hello-blog-an-advanced-setup-of-ghost-and-docker-made-simple/">last post</a> we created a <code>docker-compose.yml</code> and related files that describe all the components of our blog. The beauty of Docker is that we can use those same files to get our blog up and running on the Droplet, and it should work just as it did locally.</p>
<p>We could <a href="http://linux.about.com/od/commands/l/blcmdl1_scp.htm">scp</a>/<a href="http://linux.about.com/library/cmd/blcmdl1_rsync.htm">rsync</a> the files from our local machine to the Droplet, but we're after a more robust workflow that will allow us to easily make and track changes in the future.</p>
<p>Instead we can commit everything to a <a href="https://git-scm.com/">Git</a> repository. I'm using <a href="https://bitbucket.org/">Bitbucket</a> as they offer free private repos, but you could use <a href="https://github.com">Github</a> or something else too.</p>
<pre><code class="language-bash">cd directory-from-last-post
git init
git add .
git commit -m &quot;initial commit of blog!&quot;
git remote add origin remote-repository-url
git push origin master
</code></pre>
<blockquote>
<p>We're only committing our Docker files, not the entire Ghost installation.</p>
</blockquote>
<p>Now that we have those files in the cloud, we just need to pull them down on our Droplet and start everything up. So SSH into your Droplet (you set that up <a href="https://coderunner.io/deploying-ghost-on-digital-ocean-with-docker-compose/#launchingadropletvps">earlier</a>, right?) and then:</p>
<ol>
<li>Clone your repo
<pre><code class="language-bash">git clone your-repo
</code></pre>
</li>
<li>Start it up
<pre><code class="language-bash">cd your-repo
docker-compose up -d
</code></pre>
</li>
</ol>
<p>Great, our blog is now up and running on our Droplet!</p>
<div class="attributed-image">
<img src="https://coderunner.io/content/images/2015/12/tumblr_n2b1w9YF7O1qdabzno1_1280.png" alt="By oblyvian -fullwidth">
<a href="http://oblyvian.tumblr.com/post/79331277618/hi-guys-im-back-after-a-two-month-hiatus-it">Photo</a>
 by oblyvian / <a href="http://oblyvian.tumblr.com/faq">Licence</a>
</div>
<p>We can ping its <a href="https://cloud.digitalocean.com/droplets">IP address</a> to check it's responding:</p>
<pre><code>$ ping 46.101.81.204
PING 46.101.81.204 (46.101.81.204): 56 data bytes
64 bytes from 46.101.81.204: icmp_seq=0 ttl=59 time=24.215 ms
64 bytes from 46.101.81.204: icmp_seq=1 ttl=59 time=21.129 ms
</code></pre>
<p>Now we just need to update the DNS records for <code>coderunner.io</code> to point to the Droplet so that we can actually access the blog!</p>
<blockquote>
<p>As a quick test, you could add another entry in <code>/etc/hosts</code> as we did in the local setup, but using the Droplet's IP</p>
</blockquote>
<h4 id="pointingourdomaintoourdroplet">Pointing our domain to our droplet</h4>
<p>Assuming you're using DigitalOcean, setting up the DNS records is nice and easy by following their <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-a-host-name-with-digitalocean">guide</a>.</p>
<p>In a nutshell, we just need to update the DNS nameservers for our domain to use DigitalOcean's:</p>
<pre><code>ns1.digitalocean.com
ns2.digitalocean.com
ns3.digitalocean.com
</code></pre>
<p>And then configure the A and CNAME records in the <a href="https://cloud.digitalocean.com/networking#actions-domains">networking settings</a> of the account:</p>
<p><img src="https://coderunner.io/content/images/2015/12/namecheap-cnames.png" alt="DNS setup -fullwidth"></p>
<p>These two records ensure that the domain works with or without the www.</p>
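<p>In zone-file terms, the two records look roughly like this; the IP is the illustrative Droplet address from the ping example above, so substitute your own:</p>

```
; Illustrative zone records - substitute your own Droplet IP
coderunner.io.      IN  A      46.101.81.204
www.coderunner.io.  IN  CNAME  coderunner.io.
```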
<p>Now if we hit <a href="http://coderunner.io">http://coderunner.io</a>, we'll see the Ghost page, just as we did before. Nice!</p>
<blockquote>
<p>Remember that DNS records can take up to 24-48 hours to propagate, and if you followed the last post, then you'll need to remove the local entry in <code>/etc/hosts</code> first</p>
</blockquote>
<h4 id="improvingtheworkflow">Improving the workflow</h4>
<p>Now we have our blog running live on <a href="http://coderunner.io">http://coderunner.io</a>, but we want to be able to work on it locally too.</p>
<p>We could keep changing our <code>/etc/hosts</code> file to flip between the local and the live version, but that's a bit cumbersome.</p>
<blockquote>
<p>There are tools like <a href="https://github.com/2ndalpha/gasmask">Gas Mask</a> on OSX that make managing multiple hosts files easier, but it's still not ideal; we want to avoid having to change it at all.</p>
</blockquote>
<p>What would be better is to have <code>coderunner.io</code> pointing to the live blog, and <code>coderunner.io.dev</code> pointing to our local copy.</p>
<p>We can achieve this by taking advantage of <a href="https://docs.docker.com/compose/extends/#multiple-compose-files">multiple compose files</a>. All we need to do is modify our setup slightly.</p>
<p>Let's extract the part of the <code>blog-coderunner.io</code> configuration that changes depending on the environment, which is just a handful of environment variables, by creating two new .yml files.</p>
<p><strong>local:</strong> docker-compose.override.yml:</p>
<pre><code class="language-yaml">blog-coderunner.io:
 environment:
  - VIRTUAL_HOST=coderunner.io.dev
  - NODE_ENV=development
</code></pre>
<p><strong>live:</strong> docker-compose.live.yml:</p>
<pre><code class="language-yaml">blog-coderunner.io:
 environment:
  - VIRTUAL_HOST=coderunner.io
  - NODE_ENV=production
</code></pre>
<p>We set <code>NODE_ENV</code> so that we are using 'production' on our live site, as it is <a href="http://docs.ghost.org/pl/usage/configuration/#about-environments-">more appropriate</a>.</p>
<p>Finally we make a <a href="https://gist.github.com/bennetimo/6ddb288bf645abf76b38/revisions">very small modification</a> to the <code>config.js</code> file, so that the URL configured for Ghost has '.dev' appended if we're running in development.</p>
<p>Now when we are starting up locally we can use <code>docker-compose up</code> as before. That's because by default Docker Compose will look for both a <code>docker-compose.yml</code> and a <code>docker-compose.override.yml</code> file and merge them together.</p>
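<p>As a sketch of the merge rules (the keys and values here are illustrative placeholders, not the real file contents): single-valued options in the override replace the base value, while multi-valued options like <code>environment</code> are merged entry by entry, with the override winning on conflicts.</p>

```yaml
# docker-compose.yml (base) - illustrative values only
blog-coderunner.io:
 image: ghost
 environment:
  - SOME_VAR=base-value

# docker-compose.override.yml - picked up automatically
blog-coderunner.io:
 environment:
  - SOME_VAR=local-value
  - ANOTHER_VAR=extra

# merged result used by `docker-compose up`:
#  image: ghost (single-valued keys carry over unless overridden)
#  environment: SOME_VAR=local-value, ANOTHER_VAR=extra
```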
<p>On the Droplet though we need to make a small change to how we 'up' the stack, by using instead:</p>
<p><code>docker-compose -f docker-compose.yml -f docker-compose.live.yml up -d</code></p>
<p>This time we have to explicitly list the live .yml file as it is non-standard.</p>
<p>Finally, we will add one entry to our <code>/etc/hosts</code> for our local machine:</p>
<pre><code class="language-bash">localhost coderunner.io.dev
</code></pre>
<blockquote>
<p>Again if you're using <a href="https://docs.docker.com/machine/">docker-machine</a> then you want to use the IP of the virtual machine which you can find with <code>docker-machine ip default</code></p>
</blockquote>
<p>The directory structure now is:</p>
<pre><code class="language-bash">.
|-- data-coderunner.io
|   |-- config.js #[gist](https://gist.github.com/bennetimo/6ddb288bf645abf76b38)
|   |-- Dockerfile #[gist](https://gist.github.com/bennetimo/0ab18d783557438c6145)
|   `-- env_coderunner.io
|-- docker-compose.live.yml #[gist](https://gist.github.com/bennetimo/9fdafd238a7c404fcc39)
|-- docker-compose.override.yml #[gist](https://gist.github.com/bennetimo/70d6f93f6fd2350e6470)
`-- docker-compose.yml #[gist](https://gist.github.com/bennetimo/91ca871c3aaa2e7a148a)

1 directory, 6 files
</code></pre>
<p>We can commit these changes, pull them on our Droplet, and start everything up.</p>
<p>At this point we have our blog running both locally and on a VPS (Droplet) on DigitalOcean, great!</p>
<p>We can start to write some content on our local environment now, but we don't yet have a way to push it to the live site. We'll take a look at that in the <a href="https://coderunner.io/syncing-a-dockerised-ghost-blog-to-digital-ocean-with-automated-backups/">next post</a>, where we'll add sync, as well as backup and restore from <a href="https://www.dropbox.com/">Dropbox</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Hello, Blog! - An advanced setup of Ghost and Docker made simple]]></title><description><![CDATA[So you want to set up a nice new blog with a streamlined development workflow? Great, so did I! After spending some time ironing out a setup that works for me, I thought I'd share it.]]></description><link>https://coderunner.io/hello-blog-an-advanced-setup-of-ghost-and-docker-made-simple/</link><guid isPermaLink="false">5bc4a358dc6f5d00018f800b</guid><category><![CDATA[docker]]></category><category><![CDATA[ghost]]></category><category><![CDATA[docker-compose]]></category><category><![CDATA[mariadb]]></category><category><![CDATA[nginx]]></category><dc:creator><![CDATA[Tim Bennett]]></dc:creator><pubDate>Wed, 23 Dec 2015 20:29:57 GMT</pubDate><content:encoded><![CDATA[<p><em>Update: This blog series has been updated for Ghost 2.x. If you've landed here looking to set up a new Ghost blog, you should follow the <a href="https://coderunner.io/hello-blog-an-advanced-setup-of-ghost-and-docker-made-simple-2018/">updated version</a>.</em></p><p>So you want to set up a nice new blog with a streamlined development workflow? Great, so did I! After spending some time ironing out a setup that works for me, I thought I'd share it.</p>
<p><img src="https://coderunner.io/content/images/2015/12/ghost-docker.png" alt="Ghost &amp; Docker logos"></p>
<p>If you want a simple, back-to-basics blogging platform, then <a href="https://ghost.org/">Ghost</a> is a good choice. It is focussed on the content and making it look nice right out of the box, and it is powering what you're reading now.</p>
<p>I'll detail my workflow step-by-step so that if you want to do something similar you can follow along; just replace all references to <code>coderunner.io</code> with your own domain :) I'm still making refinements, so leave a comment below with any suggestions!</p>
<p>I have split this up into three parts:</p>
<ul class="grey-box">
 <li> Part 1: Setting up a Dockerised installation of Ghost with MariaDB</li>
 <li> <a href="https://coderunner.io/deploying-ghost-on-digital-ocean-with-docker-compose">Part 2: Deploying Ghost on DigitalOcean with Docker Compose</a></li>
 <li> <a href="https://coderunner.io/syncing-a-dockerised-ghost-blog-to-digital-ocean-with-automated-backups/">Part 3: Syncing a Dockerised Ghost blog to DigitalOcean with automated backups </a> </li>
</ul>
<h2 id="thegoal">The Goal</h2>
<p>What we're shooting for:</p>
<ul>
<li>Ability to bring up/down the whole stack with a single command (we'll use <a href="https://docs.docker.com/compose/">Docker Compose</a> for that)</li>
<li>Front our blog with a <a href="https://en.wikipedia.org/wiki/Reverse_proxy">reverse proxy</a>, because we will be hosting it on a VPS and may want to have other blogs/apps on the same box</li>
<li>Simple to clone our environment for development (or to migrate to a different host in the future)</li>
<li>Let us create content first on our local environment and then sync it with our public host</li>
<li>Automated backup (and restore) to somewhere like <a href="https://www.dropbox.com">Dropbox</a></li>
</ul>
<p>Sounds good? Let's get started!</p>
<h2 id="partisettingupadockerisedinstallationofghostwithmariadb">Part I: Setting up a Dockerised installation of Ghost with MariaDB</h2>
<h4 id="guide">Guide</h4>
<ol>
<li><a href="#overview">Overview</a></li>
<li><a href="#directorystructure">Directory Structure</a></li>
<li><a href="#creatingadataonlycontainer">Creating a Data Only container</a></li>
<li><a href="#builditwithdockercompose">Build it with Docker Compose</a></li>
<li><a href="#setupourmariadbcontainer">Setup our MariaDB container</a></li>
<li><a href="#setupghost">Setup Ghost</a></li>
<li><a href="#putitallbehindnginx">Put it all behind nginx</a></li>
<li><a href="#startitup">Start it up!</a></li>
</ol>
<h4 id="overview">Overview</h4>
<p>Ghost can be set up with <a href="https://www.sqlite.org/">SQLite</a> (default) or <a href="https://www.mysql.com/">MySQL</a>/<a href="https://mariadb.org/">MariaDB</a>. I decided to use MariaDB so I have a fully featured RDBMS, and because it fits in nicely with our modular Docker setup.</p>
<p>In this post we'll setup Ghost running in a Docker container, linked to a MariaDB container and fronted by <a href="https://www.nginx.com/resources/wiki/">Nginx</a>. To wire it all together, we'll use Docker Compose.</p>
<blockquote>
<p>Before getting started, you should have <a href="https://docs.docker.com/engine/installation/">Docker installed</a> and <a href="https://docs.docker.com/compose/install/">Docker Compose setup</a>. We'll set everything up locally and then deploy to a VPS in the next post.</p>
</blockquote>
<h2 id="directorystructure">Directory Structure</h2>
<p>So that it is clear up-front, this is the directory structure we'll be putting together:</p>
<pre><code class="language-bash">.
|-- data-coderunner.io
|   |-- config.js #[gist](https://gist.github.com/bennetimo/6ddb288bf645abf76b38/ed4f50cd4acda83f540e12bc6b7bb3267ea18d93)
|   |-- Dockerfile #[gist](https://gist.github.com/bennetimo/0ab18d783557438c6145)
|   `-- env_coderunner.io
`-- docker-compose.yml #[gist](https://gist.github.com/bennetimo/91ca871c3aaa2e7a148a)

1 directory, 4 files
</code></pre>
<p>We'll build up each one as we go, but I've added the gists so you can see the final result if you want.</p>
<h2 id="creatingadataonlycontainer">Creating a Data Only container</h2>
<p>The power of Docker comes from composing together single purpose containers to create your application. To fully embrace this we'll create a <a href="http://container42.com/2013/12/16/persistent-volumes-with-docker-container-as-volume-pattern/">data only container</a> just to hold our data, and nothing more. Then we will be able to easily link the data volumes into any containers that need to access it, whether that's our Ghost container, backup container, or something else.</p>
<p>Here is the <code>Dockerfile</code> for our data container, which lives in the sub-directory <code>data-coderunner.io</code>.</p>
<pre><code class="language-docker">FROM ghost
MAINTAINER Tim Bennett &lt;tim@coderunner.io&gt;

# Create required volumes
VOLUME [&quot;/var/lib/mysql&quot;, &quot;/var/lib/ghost&quot;]

ENTRYPOINT [&quot;/bin/bash&quot;]
</code></pre>
<p>It's pretty uninteresting: we just inform Docker that we want to mount the mysql and ghost directories. I'm basing it on the Ghost image so that it reuses the same layers as the Ghost container we'll need later, to alleviate <a href="http://container42.com/2014/11/18/data-only-container-madness/">container madness</a>.</p>
<blockquote>
<p>As of Docker 1.9.0 there is a new <a href="https://docs.docker.com/engine/reference/commandline/volume_create/">Volumes API</a> which it would be nice to use here, but it is <a href="https://github.com/docker/compose/issues/2110">not yet supported in Docker Compose</a>.</p>
</blockquote>
<h2 id="builditwithdockercompose">Build it with Docker Compose</h2>
<p>Now we want to build our data container, but instead of doing it manually we'll do it with <a href="https://docs.docker.com/compose/">Docker Compose</a>, by creating a <code>docker-compose.yml</code> file at the top level of the directory:</p>
<pre><code class="language-yaml">data-coderunner.io:
 build: ./data-coderunner.io
 container_name: &quot;data-coderunner.io&quot;
</code></pre>
<p>In this file we'll be declaratively listing all of the components that make up our stack and how they link together.</p>
<p>The <a href="https://docs.docker.com/compose/compose-file/#build">build</a> directive will create our data container, and we name it so we can refer back to it later.</p>
<blockquote>
<p>It's also possible to set up the data container <a href="http://stackoverflow.com/questions/32908621/how-can-i-create-a-data-container-only-using-docker-compose-yml">directly</a> in Docker Compose, but I prefer this approach.</p>
</blockquote>
<h2 id="setupourmariadbcontainer">Setup our MariaDB container</h2>
<p>There's an officially supported <a href="https://hub.docker.com/_/mariadb/">image</a> for MariaDB which makes our lives easy.</p>
<p>All we need to do is add it to our docker-compose.yml:</p>
<pre><code class="language-yaml">mariadb:
 image: mariadb
 container_name: &quot;mariadb&quot;
 env_file: ./data-coderunner.io/env_coderunner.io
 environment:
  - TERM=xterm
 ports:
  - &quot;127.0.0.1:3306:3306&quot;
 volumes_from:
  - data-coderunner.io
</code></pre>
<p>There are a few things going on here. <code>volumes_from</code> references the data container we just created, so that MariaDB will use the <code>/var/lib/mysql</code> mount point we set up.</p>
<p>The ports mapping binds port 3306 on our host to 3306 in the container, and only on the loopback interface. Without that restriction the database container would be reachable directly from outside, but we want our only outside entry point to be the proxy we'll create shortly.</p>
<p>We also specified an <code>env_file</code> with our db configuration:</p>
<pre><code class="language-bash"># MariaDB configuration
MYSQL_ROOT_PASSWORD=&lt;REDACTED&gt;
MYSQL_USER=tim
MYSQL_PASSWORD=&lt;REDACTED&gt;
MYSQL_DATABASE=blog
</code></pre>
<p>Finally, I'm setting the <code>TERM</code> environment variable so I can use the <code>mysql</code> command-line tool to connect to the database if needed.</p>
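<p>As an aside, <code>env_file</code> entries are plain <code>KEY=VALUE</code> lines: no quoting, no <code>export</code>, and no spaces around the <code>=</code>, because Docker Compose passes each line through literally. A quick sanity check of the format, using hypothetical values:</p>

```shell
# Write a sample env file (hypothetical values) and verify every line
# is KEY=VALUE with no space around the '='
cat > /tmp/env_sample <<'EOF'
MYSQL_USER=tim
MYSQL_DATABASE=blog
EOF

grep -c '^[A-Za-z_][A-Za-z0-9_]*=[^ ]' /tmp/env_sample  # prints 2
```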
<h2 id="setupghost">Setup Ghost</h2>
<p>Next up we need to actually add Ghost, and we have an <a href="https://hub.docker.com/_/ghost/">official image</a> for that too, awesome!</p>
<pre><code class="language-yaml">blog-coderunner.io:
 image: ghost
 container_name: &quot;blog-coderunner.io&quot;
 volumes:
  - ./data-coderunner.io/config.js:/var/lib/ghost/config.js
 volumes_from:
  - data-coderunner.io
 env_file: ./data-coderunner.io/env_coderunner.io
 links:
  - mariadb:mysql
</code></pre>
<p>The only new things here are <code>volumes</code> and <code>links</code>, so let's just take a look at those.</p>
<p>Ghost uses a config.js file for <a href="http://support.ghost.org/config/">configuration</a>, and the Docker image will create one that can then be modified as needed. But we want everything to be dynamic, and pick up the fact that we're using MariaDB instead of sqlite automagically!</p>
<p>So, we'll use our own config.js that figures everything out using the environment variables from the containers. Here is the <a href="https://gist.github.com/bennetimo/6ddb288bf645abf76b38">gist</a>. Then, in the <code>volumes</code> section, we mount it straight to where Ghost is expecting it in the container.</p>
<p>Now we just need to add this to our <code>env_coderunner.io</code> file:</p>
<pre><code class="language-bash"># Ghost configuration
URL=http://coderunner.io
</code></pre>
<p>This is picked up in the config.js to configure the URL for Ghost.</p>
<p>The <code>links</code> entry tells Docker to create a tunnel between our containers by adding a <code>mysql</code> entry to the blog container's <code>/etc/hosts</code> file; now our blog container can talk to our MariaDB container using the hostname <code>mysql</code>.</p>
<p>At this point we could fire up our blog, but we wouldn't be able to access it from our local machine as we're not exposing the ports. We'll go one better than exposing the Ghost port directly, and set up <a href="https://www.nginx.com/resources/wiki/">nginx</a>.</p>
<h2 id="putitallbehindnginx">Put it all behind nginx</h2>
<p>By setting everything up behind an nginx reverse proxy, we can have multiple services (applications, other blogs etc.) running on a single box, with nginx routing traffic between them. We could set this up manually, but there is already an awesome out-of-the-box Docker setup in <a href="https://hub.docker.com/r/jwilder/nginx-proxy/">jwilder/nginx-proxy</a>.</p>
<p>Now we're really starting to see the magic and power of Docker. We're building our application by sticking together components like making a house out of lego bricks!</p>
<div class="attributed-image">
<img src="https://coderunner.io/content/images/2015/12/8505316460_78d0abaf5b_b.jpg" alt="FlickrFriday: Keep it Simple. by elPadawan, on Flickr" title>
<a href="https://www.flickr.com/photos/elpadawan/8505316460/">Photo</a>
 by elPadawan / <a href="http://creativecommons.org/licenses/by/2.0/">CC BY</a>
</div>
<p>So we add this to our <code>docker-compose.yml</code>:</p>
<pre><code class="language-yaml">nginx:
 image: jwilder/nginx-proxy
 container_name: &quot;nginx&quot;
 ports: 
  - &quot;80:80&quot;
 volumes:
  - /var/run/docker.sock:/tmp/docker.sock
</code></pre>
<p>And that's all we need to create a fully-fledged reverse proxy! Now we just need to tell it the hostname that will map to our blog, by adding an environment variable to the blog container:</p>
<pre><code class="language-yaml">environment:
  - VIRTUAL_HOST=coderunner.io
</code></pre>
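<p>For clarity, here is the complete blog service entry with the <code>VIRTUAL_HOST</code> variable folded in (just the snippets above combined):</p>

```yaml
blog-coderunner.io:
 image: ghost
 container_name: "blog-coderunner.io"
 environment:
  - VIRTUAL_HOST=coderunner.io
 volumes:
  - ./data-coderunner.io/config.js:/var/lib/ghost/config.js
 volumes_from:
  - data-coderunner.io
 env_file: ./data-coderunner.io/env_coderunner.io
 links:
  - mariadb:mysql
```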
<h2 id="startitup">Start it up!</h2>
<p>In the main blog directory:</p>
<pre><code>docker-compose up
</code></pre>
<p>And we're running!</p>
<blockquote>
<p>On the very first launch the Ghost container might try to connect to MariaDB before it has finished setting up the database. To avoid this, you can start MariaDB separately first with <code>docker-compose up -d mariadb</code>, or use my <a href="https://hub.docker.com/r/bennetimo/ghost-wait-mysql/">modified image</a>. See <a href="https://github.com/docker/compose/issues/374">here</a> for more info.</p>
</blockquote>
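<p>The wait-and-retry idea behind that modified image can be sketched as a small shell helper (the names here are illustrative; in a real entrypoint the polled command would be a TCP check against the database, such as <code>nc -z mysql 3306</code>):</p>

```shell
# Run a command until it succeeds or the attempts run out (sketch).
retry() {
  attempts=$1; shift
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# In an entrypoint this would gate Ghost's startup, e.g.:
#   retry 30 nc -z mysql 3306 && exec npm start
retry 3 true && echo "db is up"  # prints "db is up"
```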
<p>At the moment everything is on our local machine, so we can add an entry to <code>/etc/hosts</code> to simulate the domain setup.</p>
<pre><code class="language-bash">localhost  coderunner.io
</code></pre>
<blockquote>
<p>If you're using <a href="https://docs.docker.com/machine/">docker-machine</a>, then you want the IP of the virtual machine instead, which you can find with <code>docker-machine ip default</code>.</p>
</blockquote>
<p>Now we can fire up a browser and visit <a href="http://coderunner.io">http://coderunner.io</a>, and we're greeted with Ghost:</p>
<p><img src="https://coderunner.io/content/images/2015/12/ghost-welcome-screen.png" alt="ghost-welcome-page"></p>
<p>We now have a Ghost blog running, linked to a MariaDB container, and fronted by an Nginx reverse proxy, all running in Docker containers. Nice!</p>
<p>But, at the moment we're just running locally. In the <a href="http://coderunner.io/deploying-ghost-on-digital-ocean-with-docker-compose">next post</a>, we'll move this to a VPS on DigitalOcean so we're publicly accessible.</p>
]]></content:encoded></item></channel></rss>