In my last post I made the point that eliminating waste -- superfluous code, stale pull requests, unused cloud resources -- is a worthwhile investment every engineer should make on a regular basis. It minimizes complexity, which in turn lowers maintenance costs and reduces communication overhead. It also allows you to better focus your thinking and, assuming that you care about your work, makes you feel less guilty.
I promised to write a follow-up post on strategies that help eliminate waste during software development. I stand by my word; here are five things that work great for my team at Jimdo.
These days, at Jimdo, I'm part of a team that is responsible for the next-generation cloud infrastructure serving the 10+ million websites of our customers. Most of the time, we do pair programming -- a very efficient way to grasp complex topics and communicate about them. This is especially true when legacy systems are involved and your pairing partner (hey, Soenke!) happens to be one of the company's first engineers.
In our pairing sessions we automate, debug, and tune the pieces that make up our infrastructure. We write lots of code and tests. We fix bugs. We try to keep things as simple as possible, while knowing that a certain amount of complexity is necessary for our systems to do anything useful. Of course, we do enjoy creating new things from time to time; we're programmers, after all. On the other hand -- and this is the crucial point -- we also care a lot about doing the opposite:
We love to reduce complexity by getting rid of components that aren't needed. This is a rewarding investment of time -- an investment every developer should make on a regular basis.
A rather simple but effective and easy-to-set-up service discovery (SD) mechanism with near-zero maintenance costs can be built using the AWS Private Hosted Zone (PHZ) feature. A PHZ connects a Route53 hosted zone to a VPC, which in turn means that DNS records in that zone are only visible to attached VPCs.
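Announcing a service inside such a zone boils down to a single Route53 API call. Here is a minimal sketch of the idea; the zone name, record name, endpoint, and zone ID are all made up for illustration, and the actual boto3 call is shown in a comment:

```python
def upsert_record(zone_name, service, endpoint, ttl=60):
    """Build a Route53 ChangeBatch that maps <service>.<zone_name>
    to an endpoint via a CNAME record."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "%s.%s" % (service, zone_name),
                "Type": "CNAME",
                "TTL": ttl,
                "ResourceRecords": [{"Value": endpoint}],
            },
        }]
    }

batch = upsert_record("internal.example.com", "myapp-db",
                      "mydb.abc123.eu-west-1.rds.amazonaws.com")

# Announcing the service is then one API call against the private zone
# (the zone ID below is hypothetical):
#   import boto3
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="Z123EXAMPLE", ChangeBatch=batch)
```

Because the record lives in a private zone, only instances inside the attached VPC can resolve it.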
Before digging deeper into the topic, let's try to find a definition of 'simple service discovery'. I'd say in 99% of cases, service discovery amounts to something like: "I am an application called myapp; please give me (for example) my database and cache endpoints, and service Y which I rely on." So the service consumer and the service announcer need to speak a common language, and no manual human interaction should be required. This is at least how Wikipedia defines service discovery protocols:
Service discovery protocols (SDP) are network protocols which allow automatic detection of devices and services offered by these devices on a computer network. Service discovery requires a common language to allow software agents to make use of one another's services without the need for continuous user intervention.
So, back to the topic. You might think: why not use Consul, Etcd, SkyDNS, and so on?
no software is better than no software — rtomayko
Installing the software is only the beginning: you might also need to package, configure, monitor, and upgrade it, and sometimes deeply understand and debug it as well. I, for one, simply love it when my service providers do this for me (and Route53 actually has a very good uptime SLA; beat that!) so I can concentrate on adding value for my customers.
Which brings me to another point: keeping it simple is hard, and something of an art. I learned the hard way that I should avoid more complex tools and processes for as long as possible. Once you've introduced complexity, it's hard to remove again, because you or other people might have built even more complex stuff on top of it.
OK, we're almost done with my 'total cost of ownership' preaching. Another aspect of keeping things simple and lean, for me, is to use as much infrastructure as possible from my IaaS provider -- for example databases (RDS), caches (ElastiCache), queues, and storage (e.g. S3). Those services usually don't have a native interface to announce themselves to Consul, Etcd, and the like, so one would need to write some glue code that takes events from the IaaS provider, filters them, and then announces changes to the SD cluster.
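With the DNS-based approach, by contrast, the consumer side is just a naming convention plus an ordinary DNS lookup; no agent, no cluster. A sketch of what I mean (the zone name and the naming convention are hypothetical examples, not a prescribed scheme):

```python
def endpoint_name(app, dependency, zone="internal.example.com"):
    """Derive the DNS name of a dependency by convention:
    <app>-<dependency>.<private zone>."""
    return "%s-%s.%s" % (app, dependency, zone)

db_host = endpoint_name("myapp", "db")

# At runtime the application resolves the name with plain DNS, e.g.:
#   import socket
#   socket.getaddrinfo(db_host, 5432)
```

Any language with a DNS resolver speaks this "protocol" out of the box, which is exactly the point.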
As software developers, we spend a lot of our time writing code. Whether we're implementing new features, fixing nasty bugs, or doing boring maintenance work, there's always some code we either create from scratch or try to modify for the better. When writing code, we need confidence in what we do. We need to know whether our changes work as intended. That's why we write tests after or, preferably, before touching any code. So, inevitably, a huge part of our daily work comes down to these two things: programming and testing.
Programming and testing are, of course, only vague terms, and their meaning depends on how exactly you approach software development. For example, when practicing test-driven development (TDD), you typically follow these steps:

1. Write a failing test for the behavior you want.
2. Write just enough code to make the test pass.
3. Refactor, keeping all tests green.
Even if you don't practice TDD (you totally should!), you're probably still going to:

1. Write some code.
2. Test the result, manually or with automated tests.
3. Repeat until everything works.
In order to write an actual program, you have to run through these steps -- this loop -- again and again. The point is that no matter what development method you prefer, constantly jumping back and forth between programming and testing comes at a price: it doesn't just slow you down in terms of time spent (especially with long-running tests); the context switching involved also drains your mental energy, which might ultimately destroy your productivity. That's why I believe the following statement is so important -- and I'm not tired of repeating it whenever I get the chance:
When it comes to programming and testing, fast feedback is everything.
Here, fast feedback means that the time between changing code (or tests) and getting results from running the tests is reduced to a minimum. In other words, you end up with a fast edit-compile-test loop (the compile step is optional, of course). As Joel Spolsky put it: "the faster the Edit-Compile-Test loop, the more productive you will be".
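To make the loop itself concrete, here is one trip around the TDD cycle in Python; the `slugify` function is a made-up example, not code from our codebase:

```python
# Step 1: write a failing test for behavior that doesn't exist yet.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Step 2: write just enough code to make the test pass.
import re

def slugify(text):
    """Turn arbitrary text into a lowercase, dash-separated slug."""
    text = text.lower()
    return re.sub(r"[^a-z0-9]+", "-", text).strip("-")

# Step 3: run the test again (green), then refactor and repeat.
test_slugify()
```

Each trip around this cycle is one unit of feedback; the shorter the trip, the more of them you get per hour.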
There are a couple of things you can do to shorten the feedback loop. Certainly the first technique that comes to mind is isolated testing, which involves eliminating (slow) external dependencies such as databases. This topic has received a lot of attention lately and I won't go into it here (by the way, TDD isn't dead). You should absolutely invest in your tests and make them faster. Besides trying to implement faster tests, however, you can also optimize the way you run them.
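One low-tech way to optimize how you run tests is to rerun them automatically on every file change, so feedback arrives without a manual step. A stdlib-only sketch of the idea (the test command and file pattern are placeholders; dedicated tools like watchdog or guard do this more robustly):

```python
import os
import subprocess
import time

def snapshot(root):
    """Map every .py file under root to its last modification time."""
    times = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith(".py"):
                path = os.path.join(dirpath, name)
                times[path] = os.path.getmtime(path)
    return times

def watch(root=".", cmd=("python", "-m", "pytest"), interval=0.5):
    """Poll for changes and rerun the test command whenever one occurs."""
    last = snapshot(root)
    while True:
        time.sleep(interval)
        current = snapshot(root)
        if current != last:
            subprocess.call(cmd)  # rerun the tests immediately
            last = current
```

Leave `watch()` running in a terminal and the edit-test loop shrinks to "save file, glance at output".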
At Jimdo, GitHub is a central part of our development workflow. We use pull requests to introduce new features or changes. As we deploy our production branch automatically, we have to ensure that all requirements are fulfilled before a pull request gets merged.
Since Jimdo is available in eight languages, our translators work hard to enable a native experience for users all around the world. We switched from plain .po files to PhraseApp several months ago. To improve the translation workflow for developers, we have built a tool that fits our pull-request-based translation process.
The translation process naturally involves lots of people, tools, and time. For developers, it can feel odd, as it mostly consists of simple but repetitive steps: checking whether new keys are needed or used, sending the new keys along with context, and waiting for all languages before deploying the change. When we thought about improving the process, we wanted it to be really simple and easy to use:
- As a developer, I want to get a notification if I have introduced new translation keys.
- I want to be able to deliver them to the translators easily.
- I want to get notified when the translations are ready.
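A notification like the first one above can be driven by a simple scan of the changed source for keys that aren't known yet. A rough sketch of that check; the `t("...")` helper and key syntax are assumptions for illustration, not our actual tool:

```python
import re

# Matches keys passed to a hypothetical translation helper: t("some.key")
KEY_PATTERN = re.compile(r'\bt\(\s*["\']([\w.\-]+)["\']\s*[,)]')

def missing_keys(source, known_keys):
    """Return translation keys referenced in source but absent
    from the set of keys the translators already know about."""
    used = set(KEY_PATTERN.findall(source))
    return sorted(used - set(known_keys))

code = 'title = t("home.title"); t("home.subtitle")'
print(missing_keys(code, {"home.title"}))  # -> ['home.subtitle']
```

Run against the diff of a pull request, the result is exactly the list of keys a developer needs to send off for translation.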
Time for another episode of The Jimdo Sessions. A few weeks ago, Robin Böhm visited us in our office in Hamburg. He's well known for having written the first German AngularJS book, which has already been quite successful. At this point, we'd also like to thank him for sending us a copy. It's a great introduction to AngularJS, and if you prefer German books over English ones, or are just curious about what Robin has to share, make sure to check it out.
But enough of that: we invited him to our office to share his thoughts on the expert levels of AngularJS, because at Jimdo we already use AngularJS in our latest projects (we will share details once we have released that thingy). Of course, we also recorded his talk, so you can enjoy it right now. Here you go:
Wow, it’s the 9th Jimdo Session already. We came up with this thing rather unintentionally, but everyone liked the format, so we decided to keep it. For this edition, Alexander Reelsen from Elasticsearch joined us for a Friday, together with his two colleagues, Britta (Machine Learning expert) and Honza (author of the Python client).
We met him at the Developers Conference Hamburg 2013 and invited him right away to join us for a Jimdo Session, since we are always interested in meeting the people behind the products we use.
(A first draft of this blog post originally appeared on my personal blog: Wait for It…a Deep Dive in Espresso's Idling Resources)
One of the challenges developers face when writing UI tests is waiting for asynchronous computations or I/O operations to complete. In this post I'll describe how I solved that problem using the Espresso testing framework, along with a few gotchas I learned along the way. I assume you're already familiar with Espresso, so rather than describing the philosophy behind it, I'll focus on how to solve that problem the Espresso way.