Docker in development

Development on a remote server is not as tricky as it sounds. In fact, a cheap VPS droplet or a small cluster has some insane benefits over juggling a multitude of local binaries as interpreters, or over using Docker for Mac.

Here, we will take it a step further and learn how to work in a container-first environment, where we get full native Linux performance as well as 17-hour battery life and zero fan spins. We will rent a little droplet VPS and set up a remote development environment via VSCode as well as JetBrains' IntelliJ IDEA CE.

As always, the post is rather long as it depicts a research process. If you don't want to read it, the only thing you should remember is "Run your local dev env in the cloud, it is not as hard as it sounds".

I am not going to try to convince you to use containers in production. We are way past that. What I am looking into here is a seldom-mentioned effect they have on your team's cohesion and productivity as a provisioning tool: Docker in development, on a Mac.

A couple of years back, I helped lead a team of 30 engineers of various seniorities working on a fairly large core product in a fairly large company. As a pilot project, I was tasked with (carefully) introducing Docker into our production.

I have spent a ridiculous, obsessive amount of time experimenting with real projects, because synthetic benchmarks could not be trusted. Two stars, do not recommend.

When people talk about Docker, they usually talk about the benefits it brings to production. However, nobody seems to talk about what it does to your team. I have seen it do wonders in development. Today I consider it crucial for any developer to understand and use.

Keeping a top layer of infrastructure as part of your codebase in this way is healthy for everyone involved. It decouples the tools from one another and allows the team to experiment with the setup more freely and iterate super fast.

For me, dev/prod parity means feeling free to make changes to infrastructure, structure or configuration, and to deploy them without obstacles, in a similar way that automated tests and a CI pipeline give you confidence that your changes will behave the same across environments.

The first "proud dad moment" for me was when I first saw a frontend engineer making a PR wherein they had moved a couple of complex build/assets folders and all related shell scripts, all on their own. A step they would never even have attempted a year earlier.

They now owned their stack, and were proud of it.

Anyway, Docker worked for us and we did feel some benefits. The company uses it to this day, and even though I am no longer a part of it, I hear that they are making plans for wider adoption.

Time for inspiring & fun war stories, and my 2021 take on them…

It is 2021 now and I’d rather keep it under 5. Or less.

It is 2021 now and I’d rather keep my code behind a VPN and SSH key.

In 2016 I worked at a company that did a lot of 3D and VR work. My colleague Kole would keep everything in Dropbox as well. If he was working on a project that needed a powerhouse, he’d be able to just press Cmd+S and swap over to a more powerful desktop.

It is 2021 now and I’d rather scale up a VPS when i need to.

Remember my team of 30 engineers? Unfortunately for everybody involved, it did not turn out that way, and I was never satisfied with the performance and general usability of coding on my Mac when Docker was involved.

The deeper I went into the hole, the darker it got. At times, I felt like there was really no way out. As a company we'd have to buy 200 Ubuntu machines, and then teach everyone involved (and their codebases) to switch.

I found some ways around the problems, as seen in the previous two parts of this series. But at the same time, it became even more evident to me how much engineering energy and time we were losing to insufficient tooling. I was absolutely convinced by now that if we had an easy way to set up all the tooling, editors, codebases and binaries in a repeatable (but extendable) way, we could increase our velocity by at least 30%, and even put it on an upwards trajectory as the effects on learning and mentoring picked up and returned over time.

When championing a new tool in a company, the adoption must be effortless. It must be better than the previous system. The difference must be easily visible. The reasoning must be widely understood. Adopting Docker for Mac was none of these things for my team.

One of the mitigation strategies (outlined in Part #2) was to drop D4M and use a Parallels VM as the hypervisor for a new docker context, which allowed us to employ all of the crazy optimizations the Parallels team had developed for Ubuntu over the years.

With this, we had a functioning Docker, without any tricks, and we regained our dev/prod parity. The performance was almost native, even with full sync over the notoriously tough Symfony cache or Node modules folders.

When you think about it, we are keeping our code in a VM, running it there, and synchronizing it with our own computer. We are running our code editors on the host. We use tricks like docker context, DNS, port forwards and remote interpreters. We talk to it over SSH and HTTP.
The VM is a remote machine, from the perspective of our host.

So a question poses itself — if we are already jumping through so many hoops, and using all of these tools, why don’t we just remove the VM entirely? Send it off to a remote location, somewhere cool and with lots of power, and use the same tools from above to connect to it.
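To make that idea concrete, here is a minimal sketch of the last step, assuming a droplet that already runs Docker and is reachable over SSH (the address and context name are placeholders):

```
# point the local docker CLI at the remote daemon over SSH
docker context create droplet --docker "host=ssh://root@203.0.113.10"
docker context use droplet

# from now on, docker and compose commands run against the droplet
docker ps
```

The same docker context mechanism we used for the Parallels VM works unchanged against a machine on the other side of the planet.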

MVP time! I slapped up the cheapest droplet on DigitalOcean, created some SSH keys, downloaded a couple of Elixir repos and started their docker projects. Okay, that worked fine, as expected. Now what?

I quickly connected my instance of VSCode to the droplet via SSH, and selected the remote folder. Edited some code and reloaded the page. Okay, fair enough, works, as expected.
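For reference, VSCode's Remote - SSH extension picks the host up from your local ~/.ssh/config, so a single entry is all the setup there is (the hostname and key path below are placeholders):

```
# ~/.ssh/config on the laptop
Host dev-droplet
    HostName 203.0.113.10
    User root
    IdentityFile ~/.ssh/id_ed25519
```

After that, "Remote-SSH: Connect to Host…" in the command palette lists dev-droplet, and any folder you open lives on the droplet.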

The price of the remote development approach is comparable to buying a laptop. You can get the absolute best laptop on the market, a comparatively cheap M1 MacBook Pro, and still save a bit of money; depending on the project and the team member, the savings vary between $500 and $1,500.

We now rent a droplet or an EC2 instance, and we pay for it anywhere between $15 and $50 per month. If you only pay for uptime, you will pay much less. Nevertheless, let's take the most expensive one, $50!

So, the money we have initially saved will now be spread across a full year of miniature monthly payments to the VPS provider (absolute worst case scenario).

A year goes by, maybe two, maybe even three. Your expenses have no spikes, and people still don't need new computers. Macs are known for how long they stay competent. You encounter a more serious project? Scale the droplet!

My current laptop is a BTO MacBook. It had an insane price tag of almost $4,000. In a remote development world, I would get a $900 M1 machine, which holds its battery for longer and heats up much less, plus a droplet much more powerful than my top-of-the-line laptop. I just got two much better tools, for $3,000 off.

A droplet scales up to 160 GB of RAM and 40 CPU cores. That is insane by any standard. Need that ML model trained in 30 minutes? Press the proverbial Turbo button and spawn a monster droplet.

EC2 instances are even better (albeit harder to manage and predict) as you can have extremely specific instances, GPU optimized workloads and even save money on billing based around uptime.

Nowadays, my laptop never ventures above 20% CPU and lives its life at a steady 34 °C. I have no problem keeping it in my lap anymore and I can do a whole workday without using a charger even once. Chrome spends more battery than my development activities (plug: which is why I recommend using Safari).

One might say that this will be hard for juniors to accept. I would not agree; in my experience senior developers put up a fight, while juniors actually catch on pretty fast. Yes, they may have to learn a bit more about using the CLI, and they may even screw up the whole server. So what? Unlike with a local machine, you just spawn another one and within a minute they are back on track.

Also, it is a phenomenal learning opportunity. Developers of all shapes and sizes get a driver's seat on a real server! They will get to understand how to use SSH, where their code lives and how Docker fits into that.

This may not seem as important to some people, but I would argue that a frontend developer getting to understand this concept is a much more valuable cross-pollination strategy than any code pairing workshop. And you only have to learn it once.

2 CPUs for $20 a month is a pretty good deal. It has enough RAM that your yarn install won't take an eternity, and your composer install won't fail when it hits memory limits during dependency tree calculation. If you've got money to spare, I'd suggest an 8 GB/4 CPU setup; it's worth the money.

Pick the datacenter closest to you. Latency is of no real concern to us, but why not. If offered the options, add your SSH key and enable monitoring. Give it a cute name, and create!

Access the VPS via SSH, create an SSH key, and add it to your GitHub account, as you'll need to be able to clone your repositories.
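Roughly, that first session looks like this (the IP address and email are placeholders):

```
# from the laptop
ssh root@203.0.113.10

# on the droplet: generate a key and print the public half,
# then paste it into GitHub under Settings -> SSH and GPG keys
ssh-keygen -t ed25519 -C "you@example.com"
cat ~/.ssh/id_ed25519.pub
```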

Update the package lists, upgrade the system, and install common software like git, zip, docker and docker-compose.
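On a stock Ubuntu droplet that boils down to something like this (package names assume Ubuntu's own repositories; Docker's official repo is the other common option):

```
apt update && apt upgrade -y
apt install -y git zip docker.io docker-compose
systemctl enable --now docker
```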

If you haven't added an SSH key during setup, go and google it; DO has lots of tutorials on how to add one and how to disable password auth.
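For completeness, the relevant sshd settings are just two lines in /etc/ssh/sshd_config (make sure your key is already in ~/.ssh/authorized_keys before restarting, or you will be rebuilding the droplet):

```
# /etc/ssh/sshd_config  (apply with: systemctl restart ssh)
PasswordAuthentication no
PermitRootLogin prohibit-password
```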

I like using the root user for this purpose. I know that this is a taboo and a stigmatized topic, but in this specific use case, there is absolutely no need for going above and beyond here. Remember, this particular machine runs nothing, and it is not publicly accessible.

There are resources like the blogpost below that provision these servers and harden them in the same way they would harden a production server.

DO has HTTP APIs, Terraform support and even Ansible scripting support. If you are handling this workflow in a company or enterprise capacity, you'd make a base snapshot image at this point and just spawn little droplets from it whenever needed.

In fact, when I plan on going on a vacation, I do a similar thing: make a snapshot of the machine, and then destroy it. This way I archive it and don't have to pay for it.
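With doctl, DigitalOcean's CLI, that whole dance is a handful of commands; the IDs, names and sizes below are placeholders:

```
doctl compute droplet list                       # find the droplet ID
doctl compute droplet-action snapshot 12345678 --snapshot-name dev-archive
doctl compute droplet delete 12345678            # stop paying for it

# after the vacation: recreate the droplet from the snapshot's image ID
doctl compute snapshot list
doctl compute droplet create dev --image 87654321 \
  --size s-4vcpu-8gb --region fra1 --ssh-keys <key-fingerprint>
```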

It is 2021 and editors are starting to recognize a need for this. As always, the old school editors like emacs and vim already work with this setup out of the box. Why? Because you can run them on the droplet or within the container itself, duh. In that scenario they already have access to all the code and the runtime, so…

When it comes to more modern editors, with a sour smile on my face (as I was a very late adopter) I'd recommend using VSCode. Its remote editing capabilities, and the whole architecture around them, fit much better into what I do.

VSCode has a built-in "Attach to Running Container" capability (via the Remote - Containers extension). It spawns a real editor instance and you work directly with the native interpreter within the container.

From here we have access to the terminal and can clone or start projects. We won’t do any editing here however, as editing happens inside the containers themselves.

VSCode will respect any decisions made in the standard .vscode configuration files, so you can freely use them as you usually would! These files are usually committed within your projects and ensure that all team members use the same ruleset and editor settings for a project. Add dependencies and rules there; here's an example:
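(The snippet below is only illustrative; the extension IDs and settings are examples for a hypothetical Node project.)

```
// .vscode/extensions.json: extensions every team member is prompted to install
{
  "recommendations": ["dbaeumer.vscode-eslint", "esbenp.prettier-vscode"]
}

// .vscode/settings.json: shared editor rules for this project
{
  "editor.formatOnSave": true,
  "editor.tabSize": 2,
  "files.trimTrailingWhitespace": true
}
```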

Let's take a quick look at what my editing workflow looks like. It's a bit easier to show this in a video, so here are a couple; hope you don't mind:

What this gives me is a very nice way to move around different projects, start them, stop them, move files around, create dotfiles, config files or containers.

Now we can start our project via docker-compose. We can attach different editor windows to different containers.
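In practice that is just a couple of commands per project (the path and project name are placeholders), followed by the attach action from the command palette:

```
cd ~/projects/client-api
docker-compose up -d     # build and start the whole stack in the background
docker-compose ps        # confirm the containers are running

# then, in a new VSCode window:
# "Remote-Containers: Attach to Running Container..." and pick the service
```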

In the project-specific windows, we have access to its runtime, and to its perspective on the filesystem. This also means that all the linting and parsing in the editor is done by the exact same runtime. I don't have node installed locally on my host machine or on the VPS. IntelliSense just works.

You can, if you wish, keep both runtimes in the same container, but I would highly advise you to separate them.

Just for fun, let's start a second, unrelated project for a different client, on the same VPS at the same time. It runs Elixir and knows nothing about the other containers or their runtimes.

This means that each of our VSCode windows is actually highly specialized and minimal for each individual project and its runtime. Each of them has different settings (even colors), respects project’s local .vscode files, and has a different set of extensions running. Out of the box.

Update: I have decided not to pursue the approach described below. The experience on macOS is buggy, as key bindings sometimes randomly decide not to work.

JetBrains has a different idea. They allow you to spawn an editor within a Docker container, and then use a Projector app or a browser to connect to it.

The editor does not run inside the project's containers. Instead it runs on the host, and has access to the host's view of the filesystem. You can install it as a binary directly on the VPS or spawn it as a separate container via Docker.

The workflow here is to connect to your droplet, start your project, and start the Docker-container-based editor. Since all of your files are in sync with the VPS host, the editor edits the files on the VPS filesystem.
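One way to get that editor onto the droplet is JetBrains' projector-installer package; the sketch below assumes that route, and the exact commands may have drifted since this was written:

```
# on the droplet
pip3 install projector-installer --user

# interactive: pick an IDE (e.g. IntelliJ IDEA Community), then run it
~/.local/bin/projector install
~/.local/bin/projector run      # prints a URL to open in a local browser
```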

However, because JetBrains has invested a lot into remote interpreters, all of their "thick" editors are able to use a runtime from within a remote container. This means you can use the same runtime for the editor as you do for running the app: the editor connects to a container and uses its runtime for any compilation it does.

As you might imagine, this has some drawbacks. It is a bit harder to manage, and it is not as clean as your developers seeing a container-first perspective on their code. However, this approach is much more similar to what we would normally do with an editor locally.

I have been a long-time user of JetBrains products. I always hated how bulky and overwhelming they can be, but at the same time, I know first hand that when dealing with PHP, Ruby, Java or Python, there is absolutely no better IDE. Over the past couple of years, however, I have gravitated more toward VSCode, especially with Python and PHP, as any loss in functionality is quickly offset by the sheer ease of development I get.

Thank you for following through.

I would very much like to hear your thoughts on this. It is most definitely not perfect, but it is the best setup I have been able to make so far.

Easy to roll out, easy to scale, easy to destroy and easy to use.

One thing is certain, the tooling around this will continue getting better in the years to come, and we only have to gain from it!

Vi ses
