Babel, Rails 5 and Sprockets 4 with Sprockets Commoner

by Sebastian Deutsch on August 9th, 2016

For some of our Rails projects we have replaced the Rails Asset Pipeline with Webpack, and we’re quite happy with it. Webpack has so many nifty features, and by combining it with Babel we can write next-generation JavaScript today.

But sometimes, especially for smaller projects such as our gymbot, you just don’t want Webpack; it would be overkill. So what are our options if we’re looking for something leaner? There is sprockets-es6, but it’s highly experimental and not very well maintained. Another option is to drink the Kool-Aid and try out Sprockets 4.0.0.beta2.

Sprockets 4 got a major redesign so that tools like Babel can easily plug into the system (also: say hello to source maps). But merely using Sprockets edge only gets us halfway; we would still be missing things like npm-controlled dependencies and support for different environments. This is where Sprockets Commoner comes into play. Sprockets Commoner is meant as a replacement for Browserify or Webpack. Setting everything up is fairly easy. The first step is to update your Gemfile:
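A minimal Gemfile addition might look like this (the beta version pin mirrors the one mentioned above):

```ruby
# Gemfile
gem 'sprockets', '4.0.0.beta2'
gem 'sprockets-commoner'
```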

The next step is to create a manifest in app/assets/config/manifest.js that declares how your assets shall be treated by Sprockets:
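A typical Sprockets 4 manifest simply links the asset directories, along these lines:

```js
// app/assets/config/manifest.js
//= link_tree ../images
//= link_directory ../javascripts .js
//= link_directory ../stylesheets .css
```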

Then you set up a package.json (in your Rails root) where you include babel-core and all your application dependencies:
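For example (package names and versions are illustrative; `lodash` stands in for whatever your application actually depends on):

```json
{
  "name": "my-app",
  "private": true,
  "dependencies": {
    "babel-core": "^6.13.0",
    "babel-preset-es2015": "^6.13.0",
    "lodash": "^4.14.0"
  }
}
```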

You probably also want to create a .babelrc (also in your root directory) to declare your Babel transforms:
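Assuming the `babel-preset-es2015` package is installed, the simplest possible configuration is:

```json
{
  "presets": ["es2015"]
}
```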

From then on you can use all the ES6 magic in your application.js:
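For instance, classes, template strings and arrow functions — a contrived example, just to show the kind of syntax that now compiles:

```javascript
// app/assets/javascripts/application.js
class Greeter {
  constructor(name) {
    this.name = name;
  }

  greet() {
    return `Hello, ${this.name}!`;
  }
}

const names = ['Alice', 'Bob'];
const greetings = names.map((name) => new Greeter(name).greet());
console.log(greetings.join(' '));
```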

If you want to have more control over Sprockets Commoner you configure it in an initializer:
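The initializer lives in `config/initializers/`. The hook below is the standard sprockets-rails configuration block; the Commoner-specific options are deliberately left as a comment because they depend on the gem version — check the sprockets-commoner README for what it actually accepts:

```ruby
# config/initializers/sprockets_commoner.rb
Rails.application.config.assets.configure do |env|
  # Adjust Commoner's behavior here, e.g. which paths its Babel
  # processor should (not) run on. See the sprockets-commoner README.
end
```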

In our application everything went pretty smoothly. Thanks to Shopify for releasing Sprockets Commoner as open source. If you also want to play with cutting-edge technologies like ES6, Rails or Elixir, feel free to apply at 9elements.


Retrospect RuhrJS

by Madeleine Neumann on August 3rd, 2016


Let’s wind back the clock one and a half years. I had just started my new job at 9elements, and at that time I would never have given a thought to organizing a conference.

But soon after I became a part of 9elements I took over the responsibility for the already existing user group PottJS and began to organize all upcoming events of this cosy meetup. The PottJS grew quickly, and we partnered up with several other companies that provided their working space so we could scale up the meetup and invite more people.

After one of those meetups at the office, I sat together with Robin Böhm and we were joking about creating a big JavaScript conference, when he told me that he already owned the domains. Within a split second, our conversation shifted from fooling around to developing a serious idea. RuhrJS was born, and I promised Robin that I would give my all to make it happen.

But how can one organize a conference without any time and with very little experience? Back then I was a student, only working a few hours per week at 9elements and, as mentioned above (and that can’t be overstressed), without the slightest idea of how to organize a full-blown conference.

So I talked to Sebastian and Eray (co-founders of 9elements & my bosses :)) to share my vision. Needless to say, both of them thought it would be nuts for me to try and do it on my own, but after a long and productive discussion they agreed and allowed me to dedicate my working hours to organizing the RuhrJS. I would like to thank Sebastian and Eray for their trust and support; they gave me the chance to learn and outdo myself through that task.

The three of us agreed that we had to show the world that there is more to the Ruhr area than coalmining. We wanted to let everyone know that this is a flourishing metropolis full of great universities, cool meetups, and awesome companies to work for. And we wanted to be a part of the big JS-family from all around the globe, of course.

So, this is how we set out. Still, there was a long way to go. And here we return to our initial question: How to organize a conference? Since I couldn’t give myself an answer to that I needed to find people that would. And I found lovely and kind assistance. A heartfelt “Thank you!” goes out to Ola Gasildo for sharing her insights about attendee caring, code of conduct and diversity with me as well as for always being there to cheer me up, Robin and Katharina (Kida) Mehner for letting me take a look behind the scenes of RejectJS and giving me useful tips for taking care of attendees and finally Jan Lehnardt for inviting me to JSConf in Berlin and for showing me how a good conference has to be organized and held. Without them, I wouldn’t have been able to make RuhrJS nearly as great as it was in the end.

I had all the information at hand and entered the planning phase. First up: sponsors. I talked to a whole lot of different companies and asked them if they would be interested in sponsoring the first international JavaScript conference in the Ruhr area. You bet, it was frustrating. Most of them replied that they wouldn’t consider sponsoring a first-time conference. I would therefore like to thank all of our sponsors that did help me to create this event (it would be great if you would take a look at their websites; they’re all listed at the end of the article, and they’re all looking for future employees :D).

Sponsors, check! Next up: venue. We needed a venue with a stable internet connection, enough space for the attendees and a good caterer. Needless to say, I wanted it to be in Bochum as well. So I reached out to the Jahrhunderthalle but unfortunately, after I got their first offer, I had to realize that it would be too expensive for us. So I reached out to other venues and eventually found the Jahrhunderthaus. It was the perfect fit for our conference. The venue is beautiful, with a lot of space, a good caterer and everything that we needed for the conference (well, sort of, but we’ll get back to that).

Alright, we had our sponsors, and we had a venue; everything seemed pretty good so far. But the essential part of a conference was still missing: the speakers. We started a Call for Papers, and 110 people submitted their talks. After we closed the CFP, we let our early bird ticket buyers vote on which talks they’d like to hear at RuhrJS, and an amazingly high percentage of around 80% of our ticket buyers took part in the voting process (big shout out to you folks). So, piece of cake from now on, right? Just contact and invite the speakers, book flights and hotels, and finally hold the conference. Unfortunately, it’s not that easy; it turns out that a lot of possible mistakes still lie ahead, ready to be made. Rest assured, concerning the organization of the traveling and accommodation for our speakers, I nearly made them all. But eventually it all went well, and at least I got a good idea of what to avoid the next time (seems like the right moment to thank our speakers for their patience).

I wanted a good video team for the RuhrJS, so we could later upload the talks to YouTube. I knew Nils from OTSConf, who had done an incredible job filming the talks there, and so I booked him. We met at the Jahrhunderthaus to check on the technical equipment (remember from earlier: everything was supposed to be there) and realized that we would have to replace all of it. And the bad news didn’t stop there: the WiFi was the next big setback. When I talked to the folks from the Jahrhunderthaus, they told me that their WiFi could handle 400 devices. But when Dominik from rrbone checked the internet connection, he found that it was only 2 Mbps. So we had to book another provider to give our guests stable WiFi, and apart from the fact that it was sometimes a bit slow, everything worked well. Big thanks to Dominik and his team, great job!

Now, finally it was all set, and this is how RuhrJS 2016 went down: We started on Friday with an opening party at the “Bergwerk” in the Bermuda 3-Eck (translates to Bermuda Triangle). It was a great kickoff for the RuhrJS; everybody had a lot of fun.

On Saturday the actual conference began. I had barely slept the whole week (I guesstimate only 8 hours in 6 days). Though deprived of sleep, I enjoyed every second. The first day was awesome: a lot of attendees showed up early in the morning, got awesome coffee sponsored by Neopoly, listened to the speakers, talked to our sponsors or other attendees, and enjoyed an excellent breakfast and lunch. At 9 p.m. we entered a club in Bochum called the RIFF and had a great party until the early morning hours (for everybody who was there: again, I’d like to apologize for the incidents with the security staff; I made sure that this won’t happen again next year).

Finally, it was Sunday, and I was completely exhausted. Just like on Saturday we had a blast. And then I realized that I did it. What a great feeling.



So, what’s left to say about RuhrJS 2016? I am overwhelmed. I am exhausted. I am ready for the next one. Our attendees and speakers were amazing! A lovely crowd, awesome talks and great sponsors.

But what I’m most proud of is our diversity program. With the help of our sponsors, we were able to invite 40 people from underrepresented groups and could even provide two full scholarships. That means that 20% of our attendees were invited.

Almost everyone asked me whether I would like to organize the RuhrJS once more in 2017. At first, I was like: Hell no! But now that I had 20 hours of good night’s sleep, I have to say: ABSOLUTELY YES!

If you would like to be part of the next RuhrJS, just drop me a note :)



9elements GmbH


Gold Sponsors:

•    Crosscan (they also sponsored a full scholarship, which is awesome!)

•    5Minds

•    GBTEC


Silver Sponsors:

•    getit

•    Twilio

•    Mehrkanal

•    Hufgroup – Sixsense


Bronze Sponsors:

•    Railslove

•    hack.institute

•    Spronq

•    Setlog

Awesome Coffee Sponsor:

 •   Neopoly




•    Wirtschaftsförderung Bochum

•    rrbone (best Wifi ever!)

•    Bermuda Digital Studio (Pre-Party Sponsors)

•    Zalando (Main-Party Sponsor)


Diversity Sponsors:

•    View Source Conference

•    Crosscan 


Speaker Travels Sponsors:

•    Thoughtram

•    Thinkmill

Install “Let’s Encrypt” with NGINX and Phusion Passenger

by Sebastian Deutsch on April 14th, 2016


We all need more SSL! But installing SSL certificates is a big PITA. “Let’s Encrypt” is a new certificate authority (CA) offering free and automated SSL/TLS certificates. Certificates issued by Let’s Encrypt are trusted by most browsers in production today (including even some filthy ones like the Internet Explorer on Windows Vista).

Installing “Let’s Encrypt” is fairly easy:
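At the time of writing, the official client was the `letsencrypt-auto` wrapper script (it has since been renamed to certbot); roughly:

```shell
# Fetch and bootstrap the Let's Encrypt client.
git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
cd /opt/letsencrypt
./letsencrypt-auto --help
```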

Preparing for Domain Validation

“Let’s Encrypt” validates the domain by requesting a public file from your server. To add that file you need to adapt your nginx.conf:
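Something along these lines, with the webroot path and domain as placeholders:

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve ACME challenge files from a fixed webroot.
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # ... the rest of your existing Passenger configuration ...
}
```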

and then restart your NGINX.

Generate the certificates
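With the webroot prepared, certificate generation is a single command (paths and domains are placeholders):

```shell
cd /opt/letsencrypt
./letsencrypt-auto certonly --webroot \
  -w /var/www/letsencrypt \
  -d example.com -d www.example.com
```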

This should output something like this:

If it doesn’t, put a test file inside the “.well-known” directory and try to fetch it over HTTP to verify that the path is actually being served.

Finally you should have the certificate and the corresponding private key in your /etc/letsencrypt/live/ directory.

Configuring NGINX to use the certificates

Now configure your NGINX server:
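A minimal SSL server block might look like this (domain and application root are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    root /var/www/app/public;
    passenger_enabled on;
}
```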

and then restart your NGINX.


This command is macOS-only, but it helps to test various SSL diagnostics:

All results should PASS.

Automatic renewal

Let’s Encrypt certificates are only valid for 90 days. This is why we create the following script in /opt/letsencrypt/ to renew them automatically and restart NGINX:
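A sketch of such a script (the file name and log path are assumptions):

```shell
#!/bin/bash
# /opt/letsencrypt/renew.sh — renew all certificates, then reload NGINX.
/opt/letsencrypt/letsencrypt-auto renew >> /var/log/letsencrypt/renew.log 2>&1
service nginx reload
```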

Create /var/log/letsencrypt/ if it doesn’t exist. Then run crontab -e to run the script every two months:
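For example, on the first day of every second month at 3 a.m. (assuming your renewal script lives at /opt/letsencrypt/renew.sh):

```
# crontab entry: run at 03:00 on the 1st of every second month
0 3 1 */2 * /opt/letsencrypt/renew.sh
```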

Final thoughts

Working with Let’s Encrypt was pretty straightforward. In the beginning we were afraid that it wouldn’t work directly with Phusion Passenger (e.g. where to put the .well-known directory), but that part turned out to be a breeze. All we have to say is “Let’s Encrypt”…

One drop of bitterness: Wildcard certificates are currently not supported – but we’ll stay tuned.

PS: 9elements is a worldwide recognized software development consultancy. We work with technologies like React, Elixir and Ruby on Rails. If you’d like to play with new technologies: we’re hiring! If you just want to listen, follow us on Twitter or like us on Facebook.

Changing versions of Elixir and Erlang

by schnittchen on January 19th, 2016

Version managers have been around in the ruby ecosystem for quite some time. You switch (cd) to a project using a different ruby version, and voilà, you are magically using the desired version of ruby when running ruby, rails or any other binary that runs ruby eventually. This is not only handy, but simply a necessity given that the code you write needs to run against a certain version of ruby in production.

Since Elixir is a fairly new language, we can expect interesting features to be added continuously for the time to come, and we need to make sure the code we write works in the environment it will run on in production. Hence, switching Elixir versions should be just as easy as switching ruby versions has been for us in the past. However, since newer versions of Elixir make use of features of the underlying Erlang runtime which is changing as well, switching Elixir versions sometimes makes switching Erlang necessary as well.

In the past couple of days I have looked at several possible solutions for this and want to share my experiences with you.

Mechanisms used by version managers

To accomplish their goal, version managers use variations of these mechanisms:

  1. The process environment is changed in the user’s shell. This may be as simple as prepending a path to the PATH variable used by the shell for looking up binaries to execute.
  2. Replacement binaries called shims investigate the environment or a configuration file and dispatch to the desired version.

Note that a version manager may use a combination of these two, setting a custom environment to be picked up by shims, for example when the shell changes the directory for you. A version manager may offer

  • installation of different versions
  • switching the version explicitly for the duration of a shell session
  • automatic switching (when cd’ing into a directory) using configuration files

and some offer more features specific to the tool they target.

Automatic switching is what makes day-to-day work within different projects simple and reproducible. Explicit switching offers flexibility, overriding the target version of a project for as long as I need to try something out.

Requirements for switching Elixir and Erlang

I really do not want to live without automatic switching of tools. Having to type a command to switch a version is already annoying (and error-prone), but having to do it twice because Erlang needs to be switched together with Elixir is just too much.

However in CI, I want the flexibility of explicit switching. Porting a large codebase to a newer version of ruby usually starts by switching over the CI and letting it run a couple of days, before everything else is switched over. In addition, I do not want to force a new version onto every developer of the team immediately, given that people need to get stuff done and installing a new version on a developer machine is not always as trivial as it should be. Hence I want to switch to a particular version explicitly in my build jobs.

Being able to build and install a new version and using it right away (in the same shell session) is also very nice to have. The build job I made a while ago for building newer versions of ruby and installing bundler (a ruby executable) on all build slaves has saved us a tremendous amount of time.

Oh, and it needs to support bash, because that is available on every machine I work with and always the default shell.

The candidates


evm is a nice and tiny version manager for Erlang. It allows switching versions for a shell session, but supports no version file. It can install versions.

kerl seems to build and switch Erlang versions and also deploy OTP releases. I was intimidated by the number of features and configuration it offers and did not evaluate it further.


kiex installs Elixir versions and allows switching between them for a shell session. It does not support a version file.

exenv looks like it is no longer being maintained.

Erlang plus Elixir

I really wish I could solve all my requirements with a single version manager, because that would seriously simplify the setup. Several projects target more than one technology and deserved a closer look. Sadly I haven’t found any of them to meet my requirements (but wait for the nice hack at the end of the article).

erln8 Installs and switches Erlang and Elixir.
Its dispatch mechanism relies on a version file in the current directory or above, which prevented Elixir from being installed because the installation process cd’s to a path below /tmp.

asdf has a plugin mechanism to support any kind of target, providing Erlang and Elixir (and ruby). Very promising! However, I could not get it to switch tools for just a shell session.

The solution

No combination of version managers I evaluated offered an immediate solution to my requirements. I source-dived into different version managers and found out that all I needed was installation and switching for the duration of a shell session. Automatic switching turns out to be easily implemented on top; it also feels less fragile to take control of the auto-switching process myself instead of having different managers fight over my shell.

These are the managers I currently use:

  • evm for Erlang
  • kiex for Elixir
  • chruby, as before, for ruby

I removed chruby’s auto-switching from my shell setup and replaced it with a few lines of code which manage all three of them. So far I am really happy with my new setup!
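The hook can be as small as a shell function that runs on every directory change. This is only a sketch: the version-file names are my own convention, and the exact `evm`/`kiex` invocations may differ between versions of those tools:

```shell
# ~/.bashrc — switch Erlang, Elixir and ruby on cd, based on version files.
__switch_versions() {
  [ -f .erlang-version ] && evm use "$(cat .erlang-version)"
  [ -f .elixir-version ] && kiex use "$(cat .elixir-version)"
  [ -f .ruby-version ]   && chruby "$(cat .ruby-version)"
}

cd() {
  builtin cd "$@" && __switch_versions
}
```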

Here is how I build my Elixir environment inside the CI: a build job which installs versions of Erlang and Elixir, then hex, rebar and dialyzer, and also builds the PLT for dialyzer.

I learned a lot during the process, and even got some improvements to evm merged in. If you like, check out xvm and see if it works for you. Also, I would like to hear how you are dealing with switching Elixir/Erlang versions!

Quit your fucking Job! – Why we need to rethink German Company Culture

by Wojtek Gorecki on December 18th, 2015

Yeah, just quit it. Why? Because, most probably you can get a much better one!

Let me explain this bold statement in some detail. Working as a software engineer, I have had the chance to take a peek into many different tech companies in Germany. In my conversations with fellow developers I got the strong impression that German company culture is, on average, quite depressing. Here are the main impressions that led to my conclusion:

  • Blame-focused
    Even more important than meeting a deadline is having someone to blame in case you don’t make it. This leads to a vicious circle of ass covering and blame shifting.
  • Failure-focused
    When looking at your work, colleagues and supervisors will look for mistakes you made and focus on those. Making mistakes is treated as unacceptable rather than as an inevitable fact of life and work that has to be dealt with.
  • Control-focused
    Employees are being monitored and micro-managed, a fact that your supervisor will let you know in more or less subtle ways.
  • Lack of motivation
    The majority of employees see their work just as a necessary evil to earn money. That’s understandable, since you are just a small cog in the machine and don’t have much of a say.

Does any of this sound familiar to you? Ok, so much for the bad parts. Now, let me list a few good parts from my experiences here at 9elements and what I think a company should be like:

  • Trust-based
    Your boss sees that you are motivated and is sure that you are doing your best to do a great job. Your focus is not on staying long enough in the office but on reaching the aimed target. Your boss is there for you if you need guidance or have questions. Actually, they are a good friend! If you fuck something up, they will tell you, but that’s ok, because failure is an important part of the process of getting better.
  • Flexible working hours
    And by flexible, I mean flexible. Some of my colleagues start at 8am, others start at 2pm. Everyone is responsible for their own hours and makes sure to be available when they are needed. Yep, it works!
  • Work-Life-Balance
    It’s not unusual that professional life mixes with private life. But in my humble opinion, you shouldn’t even think about the difference between the two. I don’t know about you, but I am mostly alive during work, so the whole work/life divide doesn’t make much sense to me. ;)
  • Passion
    I’m really lucky and proud to say, that everyone working at 9elements comes to the office every day because they love what they are doing. On a regular basis we have coding sessions after work and try out cool new stuff and work on internal tools and products.

So, what do you say? Wouldn’t you love to have less of the first list and more of the second list? In that case, I have something to tell you:


Why, you ask? How, you ask? Well, I have one more list for you:

  • Tech skills are in high demand
    Tech is an industry with an unbelievably high skill shortage. In Silicon Valley, you just need to wear a tech t-shirt and headhunters will offer you a job. (This actually happened to a friend of mine!) So, don’t worry about finding a new job.
  • Tech is well paid
    Ask your non-IT friends about their salary or check out some stats. Even in a small or average-sized company, you should get reasonable compensation.
  • Choose wisely
    Sitting in your job interview, don’t forget that those people on the other side of the table are also applying for the job of being your employer! Try to figure out if they want you to work for them or with them. By the way, you are not looking for a new job; you’re looking for a new mission.
  • It’s evolution, baby!
    If you ask me, it’s just a matter of time until the industrial-age company model goes extinct and trust-based company cultures become the new standard. The global paradigm shift we are experiencing right now is moving our focus from earning money to self-actualization. If you are interested in such topics of cultural science, you can look it up yourself. You could start here and here. You should also read this brilliant article by Gustavo Tanaka.

If you want to see what a trust-based company culture looks like, you are very welcome to visit us at our office in lovely Bochum, Germany.

And, what a coincidence: We are hiring! ;)

An Ember.js application with a Rails API backend

by Wojtek Gorecki on September 10th, 2015

Alright, fellow fullstack developers. In the last few weeks I had the chance to dive into Ember.js, and I would like to give you a complete example of a blog application with Ember CLI for the frontend and Rails as the backend server. This article contains lots of code. I will not explain all of it in detail; I’ll just reference the sources that helped me to understand the aspects shown here. You will need basic experience in Rails and JavaScript to walk through this.

Rails Backend

Let’s get started. First we create our Rails backend server. We use the rails-api gem to generate the server.
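Roughly like this (the application name is arbitrary):

```shell
gem install rails-api
rails-api new blog-backend
cd blog-backend
```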

Here, we generate the scaffold for posts and comments.
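For example, with minimal attributes (adapt to taste):

```shell
bin/rails generate scaffold post title:string body:text
bin/rails generate scaffold comment post:references body:text
bin/rake db:migrate
```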

Don’t forget to add the has_many relation to the post model.
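That is:

```ruby
# app/models/post.rb
class Post < ActiveRecord::Base
  has_many :comments
end
```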

To set up CORS we use a gem called rack-cors. It makes configuring CORS in a Rails project as easy as writing an initializer. So add this to your gem file:
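Per the rack-cors README:

```ruby
# Gemfile
gem 'rack-cors', require: 'rack/cors'
```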

Run the bundler to install the new gem.
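That is:

```shell
bundle install
```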

And here is the initializer:
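A permissive configuration for development (`origins '*'` is fine for a demo, but lock it down in production):

```ruby
# config/initializers/cors.rb
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins '*'
    resource '*',
      headers: :any,
      methods: [:get, :post, :put, :patch, :delete, :options]
  end
end
```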

Ember Data expects the transferred JSON data between frontend and backend to be in a certain format. To meet that format we have to update the controller actions in the posts controller and the comments controller. Read this and this to learn more about the JSON format in ember data and check this out as well.
Here’s the code:
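A sketch of the posts controller; the comments controller follows the same pattern. The point is the JSON root key (`post`/`posts`) that Ember Data’s REST adapter expects:

```ruby
# app/controllers/posts_controller.rb
class PostsController < ApplicationController
  def index
    render json: { posts: Post.all }
  end

  def show
    render json: { post: Post.find(params[:id]) }
  end

  def create
    post = Post.new(post_params)
    if post.save
      render json: { post: post }, status: :created
    else
      render json: { errors: post.errors }, status: :unprocessable_entity
    end
  end

  def destroy
    Post.find(params[:id]).destroy
    head :no_content
  end

  private

  def post_params
    params.require(:post).permit(:title, :body)
  end
end
```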

To have some test data, just create a post record and a comment record in the Rails console.
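For example, in `bin/rails console`:

```ruby
post = Post.create(title: 'Hello Ember', body: 'Our first post!')
post.comments.create(body: 'Our first comment!')
```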

And finally run your development server.
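That is:

```shell
bin/rails server
```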

That’s it for the backend. The rest of this article will be all about the Ember application.

Ember Frontend

Alright, now let’s get to the really cool stuff. First of all, you need to have ember-cli installed; then we move on like this.
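That is, roughly (the app name is arbitrary):

```shell
npm install -g ember-cli
ember new blog-frontend
cd blog-frontend
ember server
```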

Ok kids, security is a very important issue, but to keep this demo quick and simple we’ll remove the following line from the package.json.
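The line in question is the content-security-policy addon that ember-cli shipped with at the time (the version number will vary):

```json
"ember-cli-content-security-policy": "0.4.0"
```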

Learn more about the content security policy here and here.

You can configure the URL of your backend inside the application adapter. So run
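```shell
ember generate adapter application
```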

to generate it, and make it look like this:
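A sketch using the plain REST adapter (Rails-flavored apps of that era often used the ActiveModelAdapter instead, but with the single-word attribute names used here the REST adapter suffices):

```javascript
// app/adapters/application.js
import DS from 'ember-data';

export default DS.RESTAdapter.extend({
  host: 'http://localhost:3000'
});
```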

As you will probably know, this is the URL of your running Rails dev server. ;)

Now lets create models, templates and routes.
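Something like this (the exact generator invocations depend on your ember-cli version):

```shell
ember generate model post
ember generate model comment
ember generate route posts
ember generate route post
ember generate route post/new
ember generate route post/comment/new
```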

This will generate a bunch of files. I’ll leave it up to you to learn what is what. Check out the following links: Models, Controllers, Router Request Lifecycle, Routes, Templates.

Add titles to the following templates to see if the routing works correctly later on. Just replace the

with something like:
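For example, for the posts template:

```handlebars
<h2>Posts</h2>
{{outlet}}
```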

Now let’s update the router. The generators already added some routes, but I learned from Andy Borsz’s blog post that it should be more like this.
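A plausible reconstruction using Ember 1.x’s `resource` helper; the dotted resource names are what give the `post/new` and `post/comment/new` template paths this walkthrough uses, but your Ember version may want different syntax:

```javascript
// app/router.js
Router.map(function() {
  this.route('posts', { path: '/posts' });
  this.resource('post.new', { path: '/posts/new' });
  this.resource('post', { path: '/posts/:post_id' }, function() {
    this.resource('post.comment.new', { path: '/comments/new' });
  });
});
```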

You can run the development server and check out the generated paths.

Install the Ember Inspector and visit the generated routes to see what already works.

Let’s move on. Now we add the model attributes according to the backend models.
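A sketch of both models, mirroring the Rails schema:

```javascript
// app/models/post.js
import DS from 'ember-data';

export default DS.Model.extend({
  title: DS.attr('string'),
  body: DS.attr('string'),
  comments: DS.hasMany('comment', { async: true })
});

// app/models/comment.js (separate file)
import DS from 'ember-data';

export default DS.Model.extend({
  body: DS.attr('string'),
  post: DS.belongsTo('post', { async: true })
});
```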

Here comes the first bit of functionality that actually reads data from the backend. Let’s implement the model function in the posts route. This will define what should be rendered in the posts.hbs template. This and this will help you understand what happens here.
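With Ember Data 1.13+ this is a one-liner (older versions used `store.find('post')` instead):

```javascript
// app/routes/posts.js
import Ember from 'ember';

export default Ember.Route.extend({
  model() {
    return this.store.findAll('post');
  }
});
```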

In the posts.hbs template we loop over the posts and render a simple li tag with the title and a link for deleting. We add a link to the ‘Post New’ page as well.
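A sketch; the action name `deletePost` is my own choice and needs a matching handler in a route:

```handlebars
{{!-- app/templates/posts.hbs --}}
<ul>
  {{#each model as |post|}}
    <li>
      {{#link-to "post" post}}{{post.title}}{{/link-to}}
      <button {{action "deletePost" post}}>delete</button>
    </li>
  {{/each}}
</ul>
{{#link-to "post.new"}}New Post{{/link-to}}
```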

Check out the index page in the browser.

You should already see the first post we created in the rails console. The delete button should work as well.

Now, let’s create the detail page for one post. Just update post.hbs to this:
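Something along these lines (the `post.comment.new` route name is an assumption that must match your router):

```handlebars
{{!-- app/templates/post.hbs --}}
<h2>{{model.title}}</h2>
<p>{{model.body}}</p>

<h3>Comments</h3>
<ul>
  {{#each model.comments as |comment|}}
    <li>{{comment.body}}</li>
  {{/each}}
</ul>

{{#link-to "post.comment.new" model}}Add a comment{{/link-to}}
{{outlet}}
```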

Go to /posts/1 and see if it works!

And now let’s make the delete button work. Here is the post.js route.
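A sketch (`findRecord` is the Ember Data 1.13+ API; the `deletePost` action name is my own choice):

```javascript
// app/routes/post.js
import Ember from 'ember';

export default Ember.Route.extend({
  model(params) {
    return this.store.findRecord('post', params.post_id);
  },

  actions: {
    deletePost(post) {
      post.destroyRecord().then(() => {
        this.transitionTo('posts');
      });
    }
  }
});
```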

Next we create a form to create a new post. This is the post/new.hbs template.
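A sketch; the `save` action name is my own choice:

```handlebars
{{!-- app/templates/post/new.hbs --}}
<h2>New Post</h2>
<form {{action "save" on="submit"}}>
  {{input value=model.title placeholder="Title"}}
  {{textarea value=model.body placeholder="Body"}}
  <button type="submit">Save</button>
</form>
```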

To implement the action handlers and save the form data to the backend, we need to update the post/new.js route to this:
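A sketch that creates an unsaved record in the model hook and persists it on submit (the `save` action name is my own choice):

```javascript
// app/routes/post/new.js
import Ember from 'ember';

export default Ember.Route.extend({
  model() {
    return this.store.createRecord('post');
  },

  actions: {
    save() {
      this.modelFor(this.routeName).save().then((post) => {
        this.transitionTo('post', post);
      });
    }
  }
});
```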

Creating posts should work now. Go to /posts/new and try it out. Also, check the Rails logs to make sure the data is being saved correctly.

So far, so good. Are you still with me? We’re almost done. Moving on to the comments.

Here’s the template for the new comment form post/comment/new.hbs.
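A sketch, analogous to the post form:

```handlebars
{{!-- app/templates/post/comment/new.hbs --}}
<form {{action "save" on="submit"}}>
  {{textarea value=model.body placeholder="Your comment"}}
  <button type="submit">Add comment</button>
</form>
```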

Now we have to implement the /post/comment/new.js route. It defines the model and handles the actions triggered in the comment form.
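A sketch; the comment is created with its post association pre-set, and `renderTemplate()` renders the form into the post template’s outlet rather than the default:

```javascript
// app/routes/post/comment/new.js
import Ember from 'ember';

export default Ember.Route.extend({
  model() {
    return this.store.createRecord('comment', {
      post: this.modelFor('post')
    });
  },

  // Render this template into the parent post template's {{outlet}}.
  renderTemplate() {
    this.render({ into: 'post' });
  },

  actions: {
    save() {
      this.modelFor(this.routeName).save().then((comment) => {
        this.transitionTo('post', comment.get('post'));
      });
    }
  }
});
```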

Read this to understand why we need the renderTemplate() function here.

You made it, you reached the end of this article. Creating posts and adding comments should work now. Yay! \o/

One last Note

I found it really exciting how fast and simple it has become to build a frontend application along with the backend server. In my opinion, Ember.js and Ember CLI in particular are indeed great tools to build ambitious web applications. You don’t have to put a puzzle together before you can start getting productive. On the other hand you spend quite some time trying to understand the Ember magic and why your code actually works. I hope this article helped you with your learning curve. ;)

Project launched: WEF Inclusive Growth Report 2015

by Sebastian Deutsch on September 8th, 2015


This week, the World Economic Forum launched “The Inclusive Growth and Development Report 2015”, built by 9elements with the help of our friends.

The report, which covers 112 economies, seeks to improve our understanding of how countries can use a diverse spectrum of policy incentives and institutional mechanisms to make economic growth more socially inclusive without dampening incentives to work, save and invest.

The Stack

9elements has done many data visualization projects in the past. For the OECD Data Portal, we mainly used D3 and d3.chart. D3 is a great library for smaller or isolated visuals, but as the project grew larger and the code started to become difficult to maintain, incorporating mobile support on top was a manageable but daunting task. To avoid these sorts of problems in the future, we decided to switch to a better front-end development stack.

For the main visuals we chose to render SVG with React.js. We love React.js for its fresh approach to writing reusable web components and its blazing fast virtual DOM. The build system was based on Gulp.js for automation and Webpack for transpiling and packaging. We used Babel to write ECMAScript 2015 (formerly known as ES6) and compile it to JavaScript that even older browsers understand. We made heavy use of the new module syntax and ECMAScript 2015 classes to structure our code.

Not only did we want HTML components with React; we also intended to apply the component approach to CSS. All CSS was developed using the BEM methodology, and we created some nice React and SASS helpers that sped up our progress while keeping the CSS maintainable and sane.

Being able to export individual visualizations as PDF files was a fundamental requirement in this project. In the end, using React had the big advantage of letting us render all the visuals on the server and simply convert the HTML/CSS into PDF files using PrinceXML. With D3 that requirement would have become a nightmare, and on top of that we would have had to use a very fragile stack with many components (like PhantomJS).

Bottom Line

With regard to the front-end stack we would definitely recommend using React.js with Webpack and Babel, especially with mobile usage in mind. If you like our work and have a project in mind feel free to contact us.

The ethereal Frontier

by Nicolas Luck on August 14th, 2015

Some of you may have heard of the new big thing: Ethereum.


You may have heard that Ethereum's co-founder Vitalik Buterin was awarded $100k through the Peter Thiel fellowship programme. That Ethereum pre-sold its cryptocurrency, the Ether, last year, raising $18 million in a self-made crowdfunding move. The same Ethereum that is sometimes called Bitcoin 2.0 and that aims to be the Web 3.0. Ethereum launched its production blockchain two weeks ago, after bootstrapping a community and doing quite some testing on several proof-of-concept test nets.

At 9elements, we are quite curious about new technologies – especially of this scale. So we took a deeper look at Ethereum and got our hands dirty with mining Ether and writing smart contracts. There will be follow-ups to this blog post in which we’d like to show a hands-on approach on Ethereum contract code. But first we probably should answer the question: smart… what?!

Bitcoin was the proof of concept for a new technology called the blockchain. As the name suggests, it is about a chain of blocks. While this is a rather technical detail of its implementation, a blockchain is best described as a decentralized database. So what does that mean?

The values that are stored in a blockchain represent a consensus knowledge of all clients that are participating in this network. Every client that looks up a specific field in this database will find the same value. With blockchain technology, this is accomplished without having a central server that hosts or has any sort of authority concerning this database. Instead, the protocol that defines the interactions of its clients makes sure that this consensus about the blockchain’s values is distributed and synchronized among all clients and is protected against fraud.

Applied to the use case of currencies – like Bitcoin – a central authority like a bank is not needed any more for people to use the currency and make transactions. That’s why people call Bitcoin electronic cash. You don't need to trust anyone – not even your bank. (You just have to trust in cryptography…)

Now, Ethereum takes it even one step further…

While Bitcoin uses the blockchain only to store the amount of Bitcoins per wallet, one could imagine blockchains for all sorts of data. For example, there's also Namecoin, which stores DNS entries on a blockchain. Ethereum goes further and uses its blockchain to store code as well. By adding a virtual machine to the equation that executes the code stored in the blockchain during the mining process, Ethereum is best characterized as a decentralized computer.

Wait, what?!

Ethereum introduces two kinds of accounts, both of which are able to hold Ether (the currency in Ethereum). First, there are accounts that are controlled by a person via a private key. This is the same as with Bitcoin. In order to transfer Ether or send other transactions from this account to another, the private key’s owner needs to issue a transaction and sign it with their key (which is done by the client software automatically after the user has entered the key’s password).

Then there are accounts that are controlled by the code that is stored within that account. Every time a transaction is sent to such an account, the miner that creates the next block containing this transaction runs the account’s code on the Ethereum Virtual Machine (EVM). Depending on that code, this account – also called smart contract – could respond to this transaction by sending a transaction itself, by doing nothing, or by altering values within its part of the blockchain, which is the equivalent of the hard disk in this computer analogy.

With this flexibility, most existing blockchain applications could be written inside and on top of Ethereum. For example, Namecoin could be implemented on Ethereum with the following contract code:

    if !contract.storage[msg.data[0]]:  # Is the key not yet taken?
        # Then take it!
        contract.storage[msg.data[0]] = msg.data[1]
        return(1)
    else:
        return(0)  # Otherwise do nothing
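For readers who don't speak Serpent, the registry logic above can be sketched in plain JavaScript against a toy storage object (the function name and storage shape here are ours, not an Ethereum API):

```javascript
// Toy key-value registry mirroring the Namecoin-style contract:
// `storage` stands in for the contract's persistent storage.
function register(storage, key, value) {
  if (!storage[key]) {      // Is the key not yet taken?
    storage[key] = value;   // Then take it!
    return 1;
  }
  return 0;                 // Otherwise do nothing
}

const storage = {};
register(storage, "example.bit", "1.2.3.4"); // first claim succeeds → 1
register(storage, "example.bit", "5.6.7.8"); // already taken → 0
```

The difference to this local toy is that in Ethereum the storage lives on the blockchain and the function runs on every miner's EVM, so the "first come, first served" rule is enforced network-wide without a registrar.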

It is easy to imagine how use cases such as crowdfunding or financial derivatives could be implemented based on Ethereum and smart contracts. Voting mechanisms and all sorts of distributed organization structures are already popping up on the horizon, too. With the Ethereum client geth (which is written in Go) exposing an RPC interface, and the Ethereum devs providing a JS library called web3.js to talk to geth, it is really easy to write apps that interact with the blockchain. Or, to be more precise, apps that talk to contracts living on the blockchain. Applications that consist of a web/mobile/native frontend and contracts living on the blockchain as the backend are called Dapps: distributed apps.
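Under the hood, web3.js talks to geth via JSON-RPC (over HTTP or IPC). As a flavor of what goes over the wire, the standard eth_blockNumber method from the Ethereum JSON-RPC spec looks roughly like this (the result value is just illustrative; it is the current block number as a hex quantity):

```
// request
{"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}

// response
{"jsonrpc": "2.0", "id": 1, "result": "0x4b7"}
```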

The genesis block of Ethereum’s production blockchain was launched on July 30th, 2015, using an interesting decentralized procedure to create this first portion of consensus. The current release is called Frontier and is meant for developers and early adopters. There is no GUI client yet, though Mist is already on its way and will probably be part of the next release.

We have already tinkered with the blockchain, written our own contracts and gotten an impression of what Ethereum could be capable of – and we are quite impressed. There are already frameworks that support contract development, which we will talk about in our next blog posts.

So, after this short introduction, stay tuned! Practical Ethereum hacking hints coming soon!

Our NanoMCU arrived

by Jacob Dawid on July 13th, 2015
Comments Off

The NanoMCU is a tiny, low-cost device featuring an ESP8266 SoC by Espressif Systems. At a price point of only 10 € (incl. shipping costs, price for 1 pc), this small all-in-one computer is a compelling offer. The chip manufacturer has released the technical specifications of the ESP8266 to the general public along with an SDK, which led to many different firmware projects being developed within an extremely short amount of time. Hardware manufacturers have built similar devices, like the MOD-WIFI-ESP8266-DEV by Olimex or the ESP8266 add-on module for Arduino.


The key features of ESP8266-based devices like the NanoMCU are the ready-to-use integrated WiFi, direct IO, low power consumption, low price and open specifications.

Developing for Ubuntu Phone

by 9elements on July 8th, 2015
Comments Off

Our Ubuntu Phone finally arrived! At the end of June Meizu released their latest Ubuntu Smartphone. With the MX4 being the first high-end Ubuntu Phone available on the market, we feel like we're ready for development.

Ubuntu Phone Meizu

Ubuntu Phone OS uses Qt as its foundation framework, sporting QML, a brand new declarative language for building state-of-the-art, native-speed applications capable of running on all major platforms. As of today, we’re not sure whether the Ubuntu Phone and QML will attract a larger market, but it's a really exciting piece of technology. We have already experienced how resource-efficient developing mobile apps with QML can be. In any case, we'll keep an eye on this interesting trend.