Install “Let’s Encrypt” with NGINX and Phusion Passenger


by Sebastian Deutsch on April 14th, 2016


We all need more SSL! But installing SSL certificates is a big PITA. “Let’s Encrypt” is a new certificate authority (CA) offering free and automated SSL/TLS certificates. Certificates issued by Let’s Encrypt are trusted by most browsers in production today (including even some filthy ones like Internet Explorer on Windows Vista).

Installing “Let’s Encrypt” is fairly easy:
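The installation command is missing from this version of the post; given the /opt/letsencrypt paths used below, it was presumably a clone of the official client:

```shell
git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
```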

Preparing for Domain Validation

“Let’s Encrypt” validates the domain by requesting a public file from your server. To add that file you need to adapt your nginx.conf:
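The original config listing is lost; a minimal sketch of the relevant server block (the webroot path is an assumption) serves the ACME challenge files statically, bypassing Passenger for that location:

```nginx
server {
    listen 80;
    server_name 9elements.com;

    # Serve ACME challenge files from a static directory so the
    # request never reaches Passenger.
    location ~ /.well-known {
        root /var/www/letsencrypt;
        allow all;
    }

    # ... your existing Passenger configuration ...
}
```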

and then restart your NGINX.

Generate the certificates

this should output something like this:
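The exact command listing is missing here; with the letsencrypt-auto client of that era, webroot-based issuance looked roughly like this (domain and webroot path are placeholders matching the config above):

```shell
cd /opt/letsencrypt
./letsencrypt-auto certonly --webroot \
  -w /var/www/letsencrypt -d 9elements.com -d www.9elements.com

# On success the output ends with something like:
# IMPORTANT NOTES:
#  - Congratulations! Your certificate and chain have been saved at
#    /etc/letsencrypt/live/9elements.com/fullchain.pem.
```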

If it doesn’t, try putting a test file inside the “.well-known” directory and fetching it over HTTP to make sure the location block actually works.

Finally you should have the certificate and the corresponding private key in your /etc/letsencrypt/live/9elements.com/ directory.

Configuring NGINX to use the certificates

Now configure your NGINX server:
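The server block is missing from this version of the post; a sketch using the certificate paths mentioned above could look like this:

```nginx
server {
    listen 443 ssl;
    server_name 9elements.com;

    ssl_certificate     /etc/letsencrypt/live/9elements.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/9elements.com/privkey.pem;

    passenger_enabled on;
    # ... the rest of your Passenger configuration ...
}
```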

and then restart your NGINX.

Testing

It helps to run a few SSL diagnostics against the new setup (the tool we used is OS X only).
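The original command is lost; a portable alternative (works on OS X and Linux) is openssl’s built-in client for inspecting the handshake:

```shell
openssl s_client -connect 9elements.com:443 -servername 9elements.com < /dev/null
# Look for "Verify return code: 0 (ok)" in the output.
```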

All results should PASS.

Automatic renewal

Let’s Encrypt certificates are only valid for 90 days. This is why we create the following script in /opt/letsencrypt/renew-letsencrypt.sh to renew them automatically and restart NGINX:
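The script itself is missing from this version of the post; a sketch consistent with the paths mentioned here (log location and init-script restart are assumptions) would be:

```shell
#!/bin/bash
# /opt/letsencrypt/renew-letsencrypt.sh
# Renew all certificates non-interactively, then restart NGINX.
cd /opt/letsencrypt
./letsencrypt-auto renew >> /var/log/letsencrypt/renew.log 2>&1
/etc/init.d/nginx restart
```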

Create /var/log/letsencrypt/ if it doesn’t exist. Then run crontab -e to schedule the script every two months:
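A crontab entry along these lines runs the script at midnight on the first day of every second month:

```
# m h dom mon dow  command
0 0 1 */2 * /opt/letsencrypt/renew-letsencrypt.sh
```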

Final thoughts

Working with Let’s Encrypt was pretty straightforward. In the beginning we were afraid that it wouldn’t work directly with Phusion Passenger (e.g. where to put the .well-known directory), but that part actually was a breeze. All we have to say is “Let’s Encrypt”…

One drop of bitterness: Wildcard certificates are currently not supported – but we’ll stay tuned.

PS: 9elements is a worldwide recognized software development consultancy. We work with technologies like React, Elixir and Ruby on Rails. If you’d like to play with new technologies: we’re hiring! If you just want to listen, follow us on Twitter or like us on Facebook.


Changing versions of Elixir and Erlang


by schnittchen on January 19th, 2016

Version managers have been around in the ruby ecosystem for quite some time. You switch (cd) to a project using a different ruby version, and voilà, you are magically using the desired version of ruby when running ruby, rails or any other binary that runs ruby eventually. This is not only handy, but simply a necessity given that the code you write needs to run against a certain version of ruby in production.

Since Elixir is a fairly new language, we can expect interesting features to be added continuously for the time to come, and we need to make sure the code we write works in the environment it will run on in production. Hence, switching Elixir versions should be just as easy as switching ruby versions has been for us in the past. However, since newer versions of Elixir make use of features of the underlying Erlang runtime which is changing as well, switching Elixir versions sometimes makes switching Erlang necessary as well.

In the past couple of days I have looked at several possible solutions for this and want to share my experiences with you.

Mechanisms used by version managers

To accomplish their goal, version managers use variations of these mechanisms:

  1. The process environment is changed in the user’s shell. This may be as simple as prepending a path to the PATH variable used by the shell for looking up binaries to execute.
  2. Replacement binaries called shims investigate the environment or a configuration file and dispatch to the desired version.

Note that a version manager may use a combination of these two, setting a custom environment to be picked up by shims, for example when the shell changes the directory for you. A version manager may offer

  • installation of different versions
  • switching the version explicitly for the duration of a shell session
  • automatic switching (when cd’ing into a directory) using configuration files

and some offer more features specific to the tool they target.

Automatic switching is what makes day-to-day work within different projects simple and reproducible. Explicit switching offers flexibility, overriding the target version of a project for as long as I need to try something out.

Requirements for switching Elixir and Erlang

I really do not want to live without automatic switching of tools. Having to type a command to switch a version is already annoying (and error-prone), but having to do it twice because Erlang needs to be switched together with Elixir is just too much.

However in CI, I want the flexibility of explicit switching. Porting a large codebase to a newer version of ruby usually starts by switching over the CI and letting it run a couple of days, before everything else is switched over. In addition, I do not want to force a new version onto every developer of the team immediately, given that people need to get stuff done and installing a new version on a developer machine is not always as trivial as it should be. Hence I want to switch to a particular version explicitly in my build jobs.

Being able to build and install a new version and using it right away (in the same shell session) is also very nice to have. The build job I made a while ago for building newer versions of ruby and installing bundler (a ruby executable) on all build slaves has saved us a tremendous amount of time.

Oh, and it needs to support bash, because that is available on every machine I work with and always the default shell.

The candidates

Erlang

evm is a nice and tiny version manager for Erlang. It lets you switch versions for a shell session, but it does not support a version file. It can install versions.

kerl seems to build and switch Erlang versions and also deploy OTP releases. I was intimidated by the number of features and configuration it offers and did not evaluate it further.

Elixir

kiex installs Elixir versions and allows switching between them for a shell session. It does not support a version file.

exenv looks like it is no longer being maintained.

Erlang plus Elixir

I really wish I could solve all my requirements with a single version manager, because that would seriously simplify the setup. Several projects target more than one technology and deserved a closer look. Sadly I haven’t found any of them to meet my requirements (but wait for the nice hack at the end of the article).

erln8 installs and switches Erlang and Elixir. Its dispatch mechanism relies on a version file in the current directory or above, which prevented Elixir from being installed, because the installation process cd’s to a path below /tmp.

asdf has a plugin mechanism to support any kind of target, providing Erlang and Elixir (and ruby). Very promising! I could however not get it to switch tools for just a shell session.

The solution

No combination of version managers I evaluated offered an immediate solution to my requirements. I source-dived into different version managers and found out that all I needed was installation and switching for the duration of a shell session. Automatic switching turns out to be easily implemented on top, and it feels less fragile to take control of the auto-switching process myself instead of different managers fighting over my shell.

These are the managers I currently use:

  • evm for Erlang
  • kiex for Elixir
  • chruby, as before, for ruby

I removed chruby’s auto-switching from my shell setup and replaced it with a few lines of code which manage all three of them. So far I am really happy with my new setup!
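My actual hook is not reproduced in this post; a bash sketch of the idea (function and file names are assumptions, and each manager must already be sourced into the shell) looks like this:

```shell
# Run after every `cd`: look for version files in the new directory
# and hand each tool to its dedicated version manager.
switch_versions() {
  [ -f .erlang-version ] && evm use "$(cat .erlang-version)"
  [ -f .elixir-version ] && kiex use "$(cat .elixir-version)"
  [ -f .ruby-version ]   && chruby "$(cat .ruby-version)"
}

cd() { builtin cd "$@" && switch_versions; }
```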

Here is how I build my Elixir environment inside the CI. This gist is a build job which installs versions of Erlang and Elixir, then hex, rebar and dialyzer, and also builds the dialyzer PLT.

I learned a lot during the process, and even got some improvements to evm merged in. If you like, check out xvm and see if it works for you. Also, I would like to hear how you are dealing with switching Elixir/Erlang versions!


Quit your fucking Job! – Why we need to rethink German Company Culture


by Wojtek Gorecki on December 18th, 2015

Yeah, just quit it. Why? Because, most probably you can get a much better one!

Let me explain this bold statement in some detail. Working as a software engineer, I had the chance to take a peek into many different tech companies in Germany. In my conversations with my fellow developers I got the strong impression that German company culture is, on average, quite depressing. Here are the main impressions that led to my conclusion:

  • Blame-focused
    Even more important than meeting a deadline is having someone to blame in case you don’t make it. This leads to a vicious circle of ass covering and blame shifting.
  • Failure-focused
    When looking at your work, colleagues and supervisors will look for the mistakes you made and focus on those. Making mistakes is treated as unacceptable rather than as an inevitable fact of life and work that has to be dealt with.
  • Control-focused
    Employees are being monitored and micro-managed, a fact that your supervisor will let you know in more or less subtle ways.
  • Lack of motivation
    The majority of employees see their work as just a necessary evil to earn money. That’s understandable, since you are just a small cog in the machine and don’t have much say.

Does any of this sound familiar to you? Ok, so much for the bad parts. Now, let me list a few good parts from my experiences here at 9elements and what I think a company should be like:

  • Trust-based
    Your boss sees that you are motivated and is sure that you are doing your best to do a great job. Your focus is not on staying long enough in the office but on reaching the agreed target. Your boss is there for you if you need guidance or have questions. Actually, they are a good friend! If you fuck something up, they will tell you, but that’s OK, because failure is an important part of the process of getting better.
  • Flexible working hours
    And by flexible, I mean flexible. Some of my colleagues start at 8am, others start at 2pm. Everyone is responsible for their own hours and makes sure to be available when they are needed. Yep, it works!
  • Work-Life-Balance
    It’s not unusual that professional life mixes with private life. But in my humble opinion, you shouldn’t even think about the difference between the two. I don’t know about you, but I am mostly alive during work, so the whole work/life divide doesn’t make much sense to me. ;)
  • Passion
    I’m really lucky and proud to say that everyone working at 9elements comes to the office every day because they love what they are doing. On a regular basis we have coding sessions after work, try out cool new stuff, and work on internal tools and products.

So, what do you say? Wouldn’t you love to have less of the first list and more of the second list? In that case, I have something to tell you:

QUIT YOUR FUCKING JOB!

Why, you ask? How, you ask? Well, I have one more list for you:

  • Tech skills are in high demand
    Tech is an industry with an unbelievably high skill shortage. In Silicon Valley, you just need to wear a tech-shirt and headhunters will offer you a job. (This actually happened to a friend of mine!) So, don’t worry about finding a new job.
  • Tech is well paid
    Ask your non-IT friends about their salary or check out some stats. Even in a small or average-sized company, you should get reasonable compensation.
  • Choose wisely
    Sitting in your job interview, don’t forget that the people on the other side of the table are also applying for the job of being your employer! Try to figure out whether they want you to work for them or with them. By the way, you are not looking for a new job, you’re looking for a new mission.
  • It’s evolution, baby!
    If you ask me, it’s just a matter of time until the industrial-age company model goes extinct and trust-based company cultures become the new standard. The global paradigm shift we are experiencing right now is moving our focus from earning money to self-actualization. If you are interested in such topics of cultural science, you can look them up yourself. You could start here and here. You should also read this brilliant article by Gustavo Tanaka.

If you want to see what a trust-based company culture looks like, you are very welcome to visit us in our office in lovely Bochum, Germany.

And, what a coincidence: We are hiring! ;)


An Ember.js application with a Rails API backend


by Wojtek Gorecki on September 10th, 2015

Alright, fellow fullstack developers. In the last few weeks I had the chance to dive into Ember.js, and I would like to give you a complete example of a blog application with Ember CLI for the frontend and Rails as the backend server. This article contains lots of code. I will not explain all of it in detail; I’ll just reference the sources that helped me to understand the aspects shown here. You will need basic experience in Rails and JavaScript to walk through this.

Rails Backend

Let’s get started. First we create our Rails backend server. We use the rails-api gem to generate the server.
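The commands are missing from this version of the post; with the rails-api gem of that era they looked roughly like this (the app name is a placeholder):

```shell
gem install rails-api
rails-api new blog-backend
cd blog-backend
```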

Here, we generate the scaffold for posts and comments.
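The generator calls are missing; a sketch of them (attribute names are assumptions based on the templates used later) would be:

```shell
bin/rails generate scaffold post title:string body:text
bin/rails generate scaffold comment post_id:integer body:text
bin/rake db:migrate
```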

Don’t forget to add the has_many relation to post model.
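The model code is missing here; the association setup is the standard ActiveRecord pattern:

```ruby
# app/models/post.rb
class Post < ActiveRecord::Base
  has_many :comments
end

# app/models/comment.rb
class Comment < ActiveRecord::Base
  belongs_to :post
end
```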

To set up CORS we use a gem called rack-cors. It makes configuring CORS in a Rails project as easy as writing an initializer. So add this to your Gemfile:
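The Gemfile line itself is missing from this version of the post; it would be:

```ruby
# Gemfile
gem 'rack-cors', require: 'rack/cors'
```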

Run the bundler to install the new gem.

And here is the initializer:
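The initializer is missing; a permissive sketch (allowing any origin, fine for this demo but not for production) looks like this:

```ruby
# config/initializers/cors.rb
Rails.application.config.middleware.insert_before 0, 'Rack::Cors' do
  allow do
    origins '*'
    resource '*',
             headers: :any,
             methods: [:get, :post, :put, :patch, :delete, :options]
  end
end
```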

Ember Data expects the transferred JSON data between frontend and backend to be in a certain format. To meet that format we have to update the controller actions in the posts controller and the comments controller. Read this and this to learn more about the JSON format in ember data and check this out as well.
Here’s the code:
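The controller code is missing here; a sketch of the posts controller (the comments controller is analogous, and the exact serialization the original used may have differed) renders JSON with the root keys Ember Data expects:

```ruby
# app/controllers/posts_controller.rb
class PostsController < ApplicationController
  def index
    render json: Post.all
  end

  def show
    render json: Post.find(params[:id])
  end

  def create
    post = Post.create(post_params)
    render json: post
  end

  def destroy
    Post.find(params[:id]).destroy
    head :no_content
  end

  private

  def post_params
    params.require(:post).permit(:title, :body)
  end
end
```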

To have some test data, just create a post record and a comment record in the Rails console.
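In the Rails console (bin/rails console), creating a record of each (titles and bodies are placeholders) looks like this:

```ruby
post = Post.create(title: 'Hello Ember', body: 'First post!')
post.comments.create(body: 'First comment!')
```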

And finally run your development server.

That’s it for the backend. The rest of this article will be all about the Ember application.

Ember Frontend

Alright, now let’s get to the really cool stuff. First of all you need to have ember-cli installed and then we’re moving on like this.
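The setup commands are missing; with ember-cli they would be roughly (app name is a placeholder):

```shell
npm install -g ember-cli
ember new blog-frontend
cd blog-frontend
```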

Ok kids, security is a very important issue, but to keep this demo quick and simple we’ll remove the following line from the package.json.

Learn more about the content security policy here and here.

You can configure the URL of your backend inside the application adapter. So run

to generate it and make it look like this
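The generate command would be `ember generate adapter application`. The adapter code is missing from this version of the post; for a Rails backend of that era, Ember Data’s ActiveModelAdapter was the usual choice:

```javascript
// app/adapters/application.js
import DS from 'ember-data';

export default DS.ActiveModelAdapter.extend({
  host: 'http://localhost:3000'
});
```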

As you will probably know, this is the URL of your running Rails dev server. ;)

Now let’s create models, templates and routes.
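The generator commands are missing; a plausible set matching the files referenced below is:

```shell
ember generate model post
ember generate model comment
ember generate route posts
ember generate route post
ember generate route post/new
ember generate route post/comment/new
```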

This will generate a bunch of files. I’ll leave it up to you to learn what is what. Check out the following links: Models, Controllers, Router Request Lifecycle, Routes, Templates.

Add titles to the following templates to see if the routing works correctly later on. Just replace the default generated content with something like:
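A minimal placeholder template (the heading text is arbitrary) is enough:

```handlebars
<h2>Posts</h2>
{{outlet}}
```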

Now let’s update the router. The generators already added some routes, but I learned from Andy Borsz’s blog post that it should be more like this.
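The router code is missing here; one possible shape, reconstructed from the template paths used in the rest of the article (this is Ember 1.x resource syntax and an assumption, not the original code):

```javascript
// app/router.js
Router.map(function() {
  this.resource('posts', { path: '/posts' });
  this.resource('post', { path: '/posts/:post_id' });
  this.resource('post.new', { path: '/posts/new' });
  this.resource('post.comment.new', { path: '/posts/:post_id/comments/new' });
});
```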

You can run the development server and check out the generated paths.

Install the Ember Inspector and visit the generated routes to see what already works.

Let’s move on. Now we add the model attributes according to the backend models.
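The model code is missing; mirroring the Rails models above, a sketch looks like this:

```javascript
// app/models/post.js
import DS from 'ember-data';

export default DS.Model.extend({
  title: DS.attr('string'),
  body: DS.attr('string'),
  comments: DS.hasMany('comment', { async: true })
});

// app/models/comment.js
import DS from 'ember-data';

export default DS.Model.extend({
  body: DS.attr('string'),
  post: DS.belongsTo('post', { async: true })
});
```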

Here comes the first bit of functionality that actually reads data from the backend. Let’s implement the model function in the posts route. This will define what should be rendered in the posts.hbs template. This and this will help you understand what happens here.
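The route code is missing; fetching all posts from the store is a one-liner:

```javascript
// app/routes/posts.js
import Ember from 'ember';

export default Ember.Route.extend({
  model() {
    return this.store.findAll('post');
  }
});
```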

In the posts.hbs template we loop over the posts and render a simple li tag with the title and a link for deleting. We add a link to the ‘Post New’ page as well.
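The template is missing from this version of the post; a sketch matching that description (action and route names are assumptions) could be:

```handlebars
<h2>Posts</h2>
<ul>
  {{#each model as |post|}}
    <li>
      {{#link-to 'post' post.id}}{{post.title}}{{/link-to}}
      <button {{action 'deletePost' post}}>delete</button>
    </li>
  {{/each}}
</ul>
{{#link-to 'post.new'}}New Post{{/link-to}}
```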

Check out the index page in the browser.

You should already see the first post we created in the rails console. The delete button should work as well.

Now, let’s create the detail page for one post. Just update post.hbs to this:
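The template is missing; a sketch of the detail page (structure is an assumption) would be:

```handlebars
<h2>{{model.title}}</h2>
<p>{{model.body}}</p>

<h3>Comments</h3>
<ul>
  {{#each model.comments as |comment|}}
    <li>{{comment.body}}</li>
  {{/each}}
</ul>
{{#link-to 'post.comment.new' model.id}}Add a comment{{/link-to}}
```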

Go to /posts/1 and see if it works!

And now let’s make the delete button work. Here is the post.js route.
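The route code is missing; a sketch that loads the record and handles a bubbled delete action:

```javascript
// app/routes/post.js
import Ember from 'ember';

export default Ember.Route.extend({
  model(params) {
    return this.store.findRecord('post', params.post_id);
  },

  actions: {
    deletePost(post) {
      // Delete on the server, then go back to the list.
      post.destroyRecord().then(() => {
        this.transitionTo('posts');
      });
    }
  }
});
```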

Next we create a form to create a new post. This is the post/new.hbs template.
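The template is missing; a minimal form sketch that submits a `save` action with the model:

```handlebars
<h2>New Post</h2>
<form {{action 'save' model on='submit'}}>
  {{input value=model.title placeholder='Title'}}
  {{textarea value=model.body placeholder='Body'}}
  <button type="submit">Save</button>
</form>
```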

To implement the action handlers and save the form data to the backend, we need to update the post/new.js route to this:
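The route code is missing; a sketch that creates a fresh record as the model and persists it on save:

```javascript
// app/routes/post/new.js
import Ember from 'ember';

export default Ember.Route.extend({
  model() {
    return this.store.createRecord('post');
  },

  actions: {
    save(post) {
      // POST to the Rails backend, then show the new post.
      post.save().then((saved) => {
        this.transitionTo('post', saved.get('id'));
      });
    }
  }
});
```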

Creating posts should work now. Go to /posts/new and try it out. Also, check the Rails logs to make sure the data is being saved correctly.

So far, so good. Are you still with me? We’re almost done. Moving on to the comments.

Here’s the template for the new comment form post/comment/new.hbs.
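The template is missing; analogous to the post form, a sketch would be:

```handlebars
<h3>New Comment</h3>
<form {{action 'save' model on='submit'}}>
  {{textarea value=model.body placeholder='Your comment'}}
  <button type="submit">Save</button>
</form>
```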

Now we have to implement the /post/comment/new.js route. It defines the model and handles the actions triggered in the comment form.
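The route code is missing; a sketch that associates the new comment with its post and renders the form into the application outlet (details are assumptions reconstructed from the surrounding text):

```javascript
// app/routes/post/comment/new.js
import Ember from 'ember';

export default Ember.Route.extend({
  model(params) {
    return this.store.createRecord('comment', {
      post: this.store.peekRecord('post', params.post_id)
    });
  },

  // Without this, the template would render into post.hbs's outlet.
  renderTemplate() {
    this.render('post/comment/new', { into: 'application' });
  },

  actions: {
    save(comment) {
      comment.save().then((saved) => {
        this.transitionTo('post', saved.get('post.id'));
      });
    }
  }
});
```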

Read this to understand why we need the renderTemplate() function here.

You made it, you reached the end of this article. Creating posts and adding comments should work now. Yay! \o/

One last Note

I found it really exciting how fast and simple it has become to build a frontend application along with the backend server. In my opinion, Ember.js and Ember CLI in particular are indeed great tools to build ambitious web applications. You don’t have to put a puzzle together before you can start getting productive. On the other hand you spend quite some time trying to understand the Ember magic and why your code actually works. I hope this article helped you with your learning curve. ;)


Project launched: WEF Inclusive Growth Report 2015


by Sebastian Deutsch on September 8th, 2015


This week, the World Economic Forum launched “The Inclusive Growth and Development Report 2015” with the combined forces of 9elements and the help of our friends.

The report, which covers 112 economies, seeks to improve our understanding of how countries can use a diverse spectrum of policy incentives and institutional mechanisms to make economic growth more socially inclusive without dampening incentives to work, save and invest.

The Stack

9elements has done many data visualization projects in the past. For the OECD Data Portal, we’ve mainly used D3 and d3.chart. D3 is a great library for smaller or isolated visuals, but when the project grew larger and the code started to become difficult to maintain, incorporating mobile support on top was manageable but a daunting task. To avoid these sorts of problems in the future, we’ve decided to switch to a better front-end development stack.

For the main visuals we’ve chosen to render SVG with React.js. We love React.js for its fresh approach to writing reusable web components and its blazing fast virtual DOM. The build system was based on Gulp.js for automation and Webpack for transpiling and packaging. We’ve used Babel to write ECMAScript 2015 (ES6) and compile it to JavaScript that even older browsers understand. We’ve made heavy use of the new module syntax and ECMAScript 2015 classes to structure our code.

Not only did we want HTML components with React, but we also intended to bring the component approach to CSS: all CSS was developed using the BEM methodology, and we’ve created some nice React and SASS helpers that sped up our progress while keeping the CSS maintainable and sane.

Being able to export individual visualizations as PDF files was a fundamental requirement in this project. In the end, using React had the big advantage that we could render all the visuals on the server and simply convert the HTML/CSS into PDF files using PrinceXML. With D3, that requirement would have become a nightmare, and on top of that we would have had to use a very fragile stack with many components (like PhantomJS).

Bottom Line

With regard to the front-end stack we would definitely recommend using React.js with Webpack and Babel, especially with mobile usage in mind. If you like our work and have a project in mind feel free to contact us.


The ethereal Frontier


by Nicolas Luck on August 14th, 2015

Some of you may have heard of the new big thing: Ethereum.


You may have heard that Ethereum's co-founder Vitalik Buterin was awarded $100k through the Peter Thiel Fellowship programme. That Ethereum which pre-sold its cryptocurrency, Ether, last year, raising $18 million in a self-made crowdfunding move. The same Ethereum that is sometimes called Bitcoin 2.0 and that aims to be the Web 3.0. Ethereum launched its production blockchain two weeks ago after bootstrapping a community and doing quite some testing on several proof-of-concept test nets.

At 9elements, we are quite curious about new technologies – especially of this scale. So we took a deeper look at Ethereum and got our hands dirty with mining Ether and writing smart contracts. There will be follow-ups to this blog post in which we’d like to show a hands-on approach on Ethereum contract code. But first we probably should answer the question: smart… what?!

Bitcoin was the proof of concept for a new technology called the blockchain. As the name suggests, it is about a chain of blocks. While this is a rather technical detail of its implementation, a blockchain is best described as a decentralized database. So what does that mean?

The values that are stored in a blockchain represent a consensus knowledge of all clients that are participating in this network. Every client that looks up a specific field in this database will find the same value. With blockchain technology, this is accomplished without having a central server that hosts or has any sort of authority concerning this database. Instead, the protocol that defines the interactions of its clients makes sure that this consensus about the blockchain’s values is distributed and synchronized among all clients and is protected against fraud.

Applied to the use case of currencies – like Bitcoin – a central authority like a bank is not needed any more for people to use the currency and make transactions. That’s why people call Bitcoin electronic cash. You don't need to trust anyone – not even your bank. (You just have to trust in cryptography…)

Now, Ethereum takes it even one step further…

While Bitcoin uses its blockchain only to store the amount of Bitcoin per wallet, one could imagine blockchains for all sorts of data. For example, there's also Namecoin, which stores DNS entries on a blockchain. Ethereum uses its blockchain to even store code. By adding a virtual machine that executes the code stored in the blockchain during the mining process, Ethereum is best characterized as a decentralized computer.

Wait, what?!

Ethereum introduces two kinds of accounts both of which are able to hold Ether (the currency in Ethereum). First, there are accounts that are controlled by a person via a private key. This is the same as with Bitcoin. In order to transfer Ether or send other transactions from this account to another, the private key’s owner needs to issue a transaction and sign it with their key (which is done by the client software automatically after the user has entered the key’s password).

Then there are accounts that are controlled by the code that is stored within that account. Every time a transaction is sent to such an account, the miner that creates the next block containing this transaction runs the account’s code on the Ethereum Virtual Machine (EVM). Depending on that code, this account – also called a smart contract – could respond to this transaction by sending a transaction itself, by doing nothing, or by altering values within its part of the blockchain, which is the equivalent of the hard disk in this computer analogy.

With this flexibility, most existing blockchain applications could be written inside and on top of Ethereum. For example, Namecoin could be implemented on Ethereum with the following contract code:

SERPENT (Ethereum’s Python-like contract language):

  if !contract.storage[msg.data[0]]: # Is the key not yet taken?
      # Then take it!
      contract.storage[msg.data[0]] = msg.data[1]
      return(1)
  else:
      return(0) # Otherwise do nothing

It is easy to imagine how use cases such as crowdfunding or financial derivatives could be implemented based on Ethereum and smart contracts. Voting mechanisms and all sorts of distributed organization structures are already popping up on the horizon. With the Ethereum client geth (which is written in Go) having an RPC interface, and the Ethereum devs providing a JS library called web3.js to talk to geth, it is really easy to write apps that interact with the blockchain. Or, to be more precise, apps that talk to contracts living on the blockchain. Applications that consist of a web/mobile/native frontend and contracts living on the blockchain as the backend are called Dapps: distributed apps.
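To give an idea of what the web3.js side looked like at the time (contract address, ABI and method names are placeholders, and geth must be running locally with its RPC interface enabled):

```javascript
var Web3 = require('web3');
var web3 = new Web3(new Web3.providers.HttpProvider('http://localhost:8545'));

// abiArray and the address are placeholders for a deployed registry contract.
var registry = web3.eth.contract(abiArray).at('0x…');

// Read a value: a constant call, executed locally, no transaction needed.
console.log(registry.get.call('mykey'));

// Write a value: a real transaction that gets mined into the blockchain.
registry.set.sendTransaction('mykey', 'myvalue', { from: web3.eth.accounts[0] });
```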

The genesis block of Ethereum’s production blockchain was created on July 30th, 2015, using an interesting decentralized manner of establishing this first portion of consensus. The current release is called Frontier, and it is meant for developers and early adopters. There is no GUI client yet, though Mist is already on its way and will probably be part of the next release.

We have already tinkered with the blockchain, written our own contracts, and gotten an impression of what Ethereum could be capable of – and we are quite impressed. There are already frameworks available which support coding contracts, and we will talk about them in our next blog posts.

So, after this short introduction, stay tuned! Practical Ethereum hacking hints coming soon!


Our NanoMCU arrived


by Jacob Dawid on July 13th, 2015

The NanoMCU is a tiny, low-cost device featuring an ESP8266 SoC by Espressif Systems. At a price point of only 10 € (incl. shipping, for a single piece), this small all-in-one computer is a strong offer. The technical specifications of the ESP8266 have been released to the general public by the chip manufacturer along with an SDK, which led to many different firmware variants being developed within an extremely short amount of time. Hardware manufacturers have developed similar devices like the MOD-WIFI-ESP8266-DEV by Olimex or the ESP8266 add-on module for Arduino.


The key features of ESP8266-based devices like the NanoMCU are the ready-to-use integrated WiFi chip, direct IO, low power consumption, low price and open specifications.


Developing for Ubuntu Phone


by 9elements on July 8th, 2015

Our Ubuntu Phone finally arrived! At the end of June Meizu released their latest Ubuntu Smartphone. With the MX4 being the first high-end Ubuntu Phone available on the market, we feel like we're ready for development.


Ubuntu Phone OS uses Qt as its foundation framework, sporting QML, a declarative language for designing state-of-the-art, native-speed applications capable of running on all major platforms. As of today, we’re not sure whether the Ubuntu Phone and QML will attract a larger market in the future, but it's a really exciting piece of technology. We have already experienced how resource-efficient developing mobile apps with QML can be. In any case, we’ll keep an eye on this interesting trend.


German Valley Week Review


by Sebastian Deutsch on June 12th, 2015


I just returned from my German Valley Week trip to San Francisco and Silicon Valley. German Valley Week is an organized trip where entrepreneurs, investors and politicians from Germany visit disruptive startups, ranging from newer ones like Uber or Stripe to established companies like Google or Facebook. Each day we visited two or three companies and got an idea of how they started, grew and eventually scaled into unicorns.

Culture

The first thing you notice when visiting one of these companies is that they radiate a special company culture. Company culture is the most important thing besides having a great idea and kick-ass engineers. John Collison puts it straight: "You want to work with enjoyable people. And nowadays companies don't hire the best talent. People are joining companies." A lot of these startups create workspaces that focus on self-expression and creativity. Some foster a healthy lifestyle: most of them had a cafeteria that served fresh, healthy food. The borders between working and living blur. Radiating the company values is important, so they have motivational posters or art in their offices. (Facebook: "No problem at Facebook is someone else's problem.")

Educate

All the startups we visited try to keep their employees educated all the time. Teams present their learnings, often across departments. Sales learns from devs. Devs learn from sales. And it never stops. If you use the men's bathroom at Facebook, there is a "Developer Learning Snippet" and a marketing update above or in front of the toilet. They have walls that reiterate what value means for their customers. They've streamlined the onboarding process for new employees to perfection. Most startups have multiple big info screens showing progress, traction and sometimes even sales data. This kind of communication fosters a deep understanding of the business in all units.

Think big

There is a German rap song by Deichkind called "Denken Sie Groß" (Think Big). One line goes: "Don't build a terraced house, build a suburb... where you rule like a warlord. Think big!". It's true for everything that I've seen. Uber has big info screens that show realtime usage in every major city they have expanded to so far. Facebook built a fucking Disney-esque campus to retain and entertain their employees. Sometimes a company has a very simple product - but it still takes a hundred engineers to improve it to stay ahead of the competition.

Thanks

These were my key impressions. There was more, like meeting great people such as Andy Bechtolsheim, Tim Chang and many others. It was overwhelming and too much to put into one blog post. Big thanks to Kathrin Zibis, Chris Tegge and Nathan Williams for organizing such a great event. Also thanks to Stefan Peukert and Tom Bachem for nudging me to come along. I would definitely do it again.



Go in Production


by Sebastian Deutsch on March 6th, 2015


Some of our projects have been gaining traction lately, which is why we need to scale some parts of the infrastructure. 9elements started the search for a language that gives us more performance but is also expressive and easy to write. Since Go is used by some high-profile projects, we decided to give it a shot.

Go is a very simple language: It is statically typed. It is garbage collected, so you don't have to worry about memory issues. It has built-in support for concurrency. A distinctive feature is the absence of classic objects with methods. You have structs like in C. And you have functions that can take receivers, so that they can operate on structs, but there is no direct concept of inheritance (though you can imitate it with anonymous fields). The most powerful concept is interfaces. Have you ever seen a bigger Java or C++ architecture where almost every class had an interface so that you could swap out the implementation? Go is like: "Hey, let's get rid of the classes and just focus on interfaces!"

But Go also has some drawbacks: Coming from a Ruby background the language is not very expressive. You can do some meta programming using struct tags (it's like annotations in Java for structs) together with reflection, but the possibilities are limited. Especially if you're overusing struct tags your code becomes quite illegible. All in all I think that the pros weigh more than the cons so we decided to use it in production.

Since we're mostly building web applications it was a natural thing to build a web application. The next step was to check out the Go ecosystem to do so. We've taken a look at the following web frameworks:

  • Beego - Beego is a full stack web framework. It seems to be pretty popular, but we were searching for something more lightweight.
  • Revel - Same goes for Revel.
  • Martini - Martini is also one of the more popular Go web frameworks, by Codegangsta. But it was abandoned by its author because its source is not very idiomatic Go.
  • Negrino - It's by the same author like Martini, but it is broken down to smaller components and written in Go idiomatic code.

Eventually we went with Negroni, since it is lightweight and we didn't want to swap out one monolithic framework (Rails) for another monolithic framework. We wanted something small that does a few things really well. We went with GORM for database management, which seemed to be the best object-relational mapper (didn't we say earlier that Go doesn't have objects?) out there.

The application we've written is a small service that does autocompletion for German cities and ZIP numbers and it also provides the geolocations for it. We were able to drive down response times from 50ms to 7ms which is quite awesome. All in all it was a great experience to rewrite that service in Go and in the future we'll probably use Go as a great sidekick technology for Rails.

If you like what you're reading and you also want to play with these technologies - 9elements is hiring.