Visual storytelling using WebGL


by Daniel Hoelzgen on April 22nd, 2014


Recently we were working on a redesign of the uformit website. Uformit, an online marketplace for personalized 3D design, was already presented at the 3D Printshow in London and New York, but never announced to the public. It features a WebGL powered product display, allowing the user to directly form the product by adjusting its parameters and see the result in real-time.

During the show, however, it was easy to explain in person how it worked, what the story behind it was, and what is so special about the technology that allows designers to create products that can be personalized into truly unique pieces. We first thought about creating a video that would explain the whole process, but after a few discussions we quickly came up with a simple, yet obvious idea: if we already have the product as a 3D object on the website, why not use exactly that to tell the story around it?

Our designer Kevin Kalde came up with a design idea we instantly fell in love with: He combined traditional visual storytelling techniques with the use of a WebGL rendered object that moves from page to page, allowing us to explain the different steps and aspects in the creation of a design on uformit without losing focus. And after developing a first prototype, we all agreed that it really felt right. Unfortunately, a few issues (dragons) came up…

 

Firefox: Works great (until it freezes)

Even without using background workers for loading the model, Firefox was the only browser that seemed to handle model loading without any jittering. Unfortunately, after some time we stumbled upon a huge issue: under certain circumstances, the browser (v28) is likely to freeze when the WebGL canvas is hidden and then re-appears, whether it is hidden via code or scrolled out of view and back.

After checking out the current nightly build, we noticed that this bug seems to be fixed in version 29, which is scheduled for release on April 29th. So we decided to live with the problem for a few days and to disable the features that cause the freeze where possible. Perhaps we could have mitigated the problem with some tricks, but sacrificing code quality and stability to work around an already fixed bug (and wasting even more resources on the problem than we already had) did not seem worth it, so we left it at that.

 

IE: WebGL light

Internet Explorer up to version 10 is not able to display WebGL content at all. Although version 11 supports WebGL, there are a few things you cannot use, explained by Microsoft in this post. They show workarounds for some of the problems, but keep in mind that some libraries like three.js are not willing to adjust their code to compensate for missing features (which is a reasonable stance), so you might have to keep an eye on this yourself. However, Microsoft is working on getting everything in place.

 

Safari: So optimized your animation does not work

Looking at the numbers, it seems Apple did a great job of tweaking Safari’s performance. When you actually try to use this performance for animations, though, you slowly get the impression that Safari isn’t doing things faster, it’s just doing less. It is well known that some methods are well suited to web animations and others aren’t, but if you have to do calculations based on the scrolling progress and the position of other elements, you rely on these values being updated very often as well. Even when using animation frames and getting acceptably high frame rates, our animation did not run smoothly, especially when moving synchronously with other elements on the page while scrolling. Investigating further, we realized that the values we used for position calculations were simply not updated very often, even though the page itself scrolled smoothly.
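To illustrate, here is a minimal sketch of the pattern we are describing: an animation loop that derives the object’s position from the scroll offset. The mesh, renderer, scene and camera names are hypothetical three.js objects, not code from our page. Even with such a loop running at a high frame rate, Safari handed us stale scroll values:

// Drive the animation from the scroll position inside a
// requestAnimationFrame loop.
var lastScrollY = -1;
function tick() {
  var scrollY = window.pageYOffset; // updated only sporadically in Safari
  if (scrollY !== lastScrollY) {
    lastScrollY = scrollY;
    var progress = scrollY / (document.body.scrollHeight - window.innerHeight);
    mesh.position.y = progress * 10;  // hypothetical three.js mesh
    renderer.render(scene, camera);   // hypothetical renderer/scene/camera
  }
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);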

Additionally, Safari sometimes seems to have a buffer problem when the size of the WebGL context changes. This causes a strange flickering during size-related animations and becomes a real problem for features like zoom, where the whole screen flickers like a badly designed computer malfunction in an old science-fiction movie.

At least for the position calculations there are ways around this, such as handling the scrolling yourself so that everything at least moves synchronously slowly. And of course this is not directly caused by WebGL, although having a WebGL container on the page does not exactly increase performance, so you run into these issues earlier. In the end, we decided to completely disable WebGL in Safari for this specific page and to serve a fallback page that is less interesting, but at least moves as fast and as smoothly as intended, without crazy tricks. We really hope Apple works on this in the next updates of Safari. It’s true that a browser should not be optimized for one particular type of page, but forcing developers in a specific direction by simply not giving them the tools to build all kinds of web pages at the same quality cannot be the right way, either.

 

Chrome: Thank you, Google

First of all, we used this browser during development, so there may be things that work in other browsers but not in Chrome. However, there were no bad surprises: nothing behaved slowly, no workarounds were needed, nothing flickered, and nothing strange was displayed. So: thank you, Google!

 

WebGL – it’s almost there!

Despite the few problems we had during development, the feedback we got for this page really makes up for it. People not only liked the design of the page and the fact that the story builds up around a moving 3D object; it also really helped them understand what uformit is about and how the process we describe on the page works. So, yes, of course we would do it again!

Do you have experience with WebGL for this kind of page? I’m happy to hear from you on this blog or on Twitter.


Stripe vs. Paymill


by Sebastian Deutsch on February 28th, 2014


As you might know, 9elements specializes in building digital products.

One of the tasks that comes with almost every product is payments – after all, you want to earn some money. When it comes to payments, there are plenty of options out there, and choosing the right payment provider can be a tough job. For many, the hurdle is not only technical but also economic: you might have to deal with credit card clearance contracts directly, or do a security audit to ensure that your service is PCI compliant.

In this blog post I want to compare Stripe and its German clone counterpart Paymill. Both services make it dead simple to integrate payments into your project, and both are extremely developer friendly – no business hassle involved, just nice REST APIs!

A little background: Stripe is a US-based service that was launched in 2011. They marched out to disrupt the de facto standard services like PayPal or Google Wallet. Paymill was launched by the German clone incubator Rocket Internet in 2012, and when we initially created Salon.io there was no other option, since Stripe wasn’t available in Germany. Usually I’m not comfortable using copycats, but the guys at Paymill did a pretty good job of creating an awesome product. They covered all the important features of Stripe and added some good UI improvements, too. Stripe is now open to many countries and currencies, so it’s time to review our technical choice. We used Stripe in a new project which will be released soon, so we know both services – here are our learnings:

Subscriptions

Subscriptions are possible in both payment solutions, but to be honest, the usage feels a bit more natural in Stripe. If you create a subscription, an invoice object is also generated. An invoice in Stripe is not the same as the e-mail or PDF that goes out to the customer; it’s the data structure that can be used to generate that e-mail or PDF. Every time a payment recurs, another invoice object is created and your application is informed via a webhook. Webhooks are URLs that are called when an event occurs in Stripe. A bigger advantage is that a lot of corner cases can be handled automatically. For example, if a customer upgrades a plan from Silver to Gold, you might want to give the customer a pro-rata discount for the new subscription. These things can easily be managed in the settings. With Paymill, you have to take care of these things yourself and set them up manually.
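As a rough sketch of how such a webhook endpoint can look (Node.js with a recent Express here; the route path and handler bodies are made up, while the event names are Stripe’s documented invoice.* types):

var express = require('express');
var app = express();
app.use(express.json()); // Stripe POSTs events as JSON

app.post('/webhooks/stripe', function (req, res) {
  var event = req.body; // { type: 'invoice.payment_succeeded', data: { object: {...} } }
  switch (event.type) {
    case 'invoice.payment_succeeded':
      // a recurring payment went through: extend the subscription
      break;
    case 'invoice.payment_failed':
      // start dunning: notify the customer, eventually downgrade
      break;
  }
  res.sendStatus(200); // acknowledge, otherwise Stripe retries the webhook
});

app.listen(3000);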

Coupons

Coupons are completely missing from Paymill. The options in Stripe are quite versatile, since you can choose between redeem dimensions and time dimensions (once, multi-month, forever). Since coupons are first-class API citizens, you can leverage them as a marketing workhorse and don’t have to worry about all the calculations in your app.

Ecosystem

Both services provide solid libraries and gems for every major platform like Rails or Node.js. Since Stripe has gained more momentum, a nice ecosystem has evolved around it. Especially for Rails there are battle-proven engines that just work. We evaluated Koudoku, a fully fledged Rails engine that handles plans & subscriptions – but eventually we found it did a bit too much (it even helped generating the views). We went with Stripe Event instead, which just handles Stripe’s webhooks, but it does what it is supposed to do and is quite lightweight. The Paymill ecosystem is not as substantial: for standard Rails projects there is no such thing as a Paymill counterpart to Stripe Event – you have to deal with webhooks yourself. Side note: there are ready-to-use integrations for e-commerce projects using shop systems like Spree or Magento. We haven’t tested those, but they look quite solid.

When to use Stripe

If you need to bring your product to market quickly, I’d definitely advise you to use Stripe, since its ecosystem and momentum of innovation put it ahead of the competition.

When to use Paymill

If you’re in a country where Stripe is simply not available, like Northern Europe or Turkey, or if you want to process non-credit-card payments like the German ELV or EU SEPA payments, Paymill is an option.

Tips (apply to both)

While developing, you’ll want to test your stuff. Testing Stripe events like recurring payments can be really painful, since you actually have to wait until a payment recurs. Testing webhooks in a live environment is a tedious task, too. Luckily, there is a handy service called Ultrahook that proxies callbacks to your local development machine. If you have further questions, ping us on Twitter or write us an email.

 


In one of our recent projects, we needed to implement the ability to sign automatically generated configuration profiles for iOS and OS X [1] in the backend. If a configuration profile is signed and iOS / OS X successfully verifies its signature, the profile looks like the following:

Signed Configuration Profile

The user sees immediately that the profile is signed and thus can be trusted, because it was signed by a trusted authority (in this example COMODO). The requirements of the project dictated that the generated profile must be signed with a valid signature. However, when we initially tried to sign it with various approaches, the signature verification always failed. After a little research, we found that the intermediate certificates necessary to perform a successful verification were not present in the configuration profile. Dick Visser’s article “Sign Apple mobileconfig files” [2] pointed us in the right direction. Signing the configuration profile on the command line integrates the intermediate certificates and allows the profile to be verified successfully.

$ openssl smime -sign -signer certificate.pem -inkey private_key.pem -certfile intermediate_certificates.pem -nodetach -outform der -in profile.mobileconfig -out profile_signed.mobileconfig

Our first solution looked like this:

def sign_mobileconfig(mobileconfig)
  # First approach: pipe the profile through the openssl CLI in a subshell.
  `echo "#{mobileconfig}" | openssl smime -sign -signer "#{Rails.root}/config/keys/certificate.pem" -inkey "#{Rails.root}/config/keys/private_key.pem" -nodetach -outform der -certfile "#{Rails.root}/config/keys/intermediate_certificates.pem" -binary`
end

This spawns a separate process to invoke openssl. But we wanted to sign the profiles with the OpenSSL API directly in Ruby, not by invoking an external program. Unfortunately, the documentation of OpenSSL itself and of the OpenSSL API in Ruby is very poor. Brian Campbell’s answer to the question “Digital signature verification with OpenSSL” on Stack Overflow [3] was the best explanation of how to sign a file with Ruby’s OpenSSL API we could find. However, the answer did not lead us to a successful signature, because either the intermediate certificates were not present in the signed configuration profile or the signature creation failed (depending on how we configured the parameters of OpenSSL::PKCS7::sign).

The road to success is to bundle the (PEM-encoded) intermediate certificate and the root certificate in separate files, each certificate in its own file. Before invoking OpenSSL::PKCS7::sign, read all certificates and create OpenSSL::X509::Certificate instances. Create an Array with the certificate instances and provide it to OpenSSL::PKCS7::sign as the fourth parameter. The fifth parameter should be OpenSSL::PKCS7::BINARY. The following listing outlines the solution that fulfills our project’s requirements in a nice and clean manner.

require 'openssl'

signing_cert_data = … # Read from file.
signing_cert = OpenSSL::X509::Certificate.new(signing_cert_data)

private_key_data = … # Read from file.
private_key = OpenSSL::PKey::RSA.new(private_key_data)

configuration_profile_data = … # Read from file.

intermediate_cert1_data = … # Read from file.
intermediate_cert1 = OpenSSL::X509::Certificate.new(intermediate_cert1_data)
intermediate_cert2_data = … # Read from file.
intermediate_cert2 = OpenSSL::X509::Certificate.new(intermediate_cert2_data)
intermediate_certs = [ intermediate_cert1, intermediate_cert2 ]

signed_file = OpenSSL::PKCS7.sign(signing_cert, private_key, configuration_profile_data, intermediate_certs, OpenSSL::PKCS7::BINARY)

The full source code is available on GitHub [4]. We welcome any comments or suggestions to further improve it.


Webbzeug – procedural texture editor for your browser


by Carsten Przyluczky on October 18th, 2013

One night, shortly after we moved to our new office, Sascha and I were watching some demos on YouTube. You know those 64K executables you run, thinking “how the heck did they put all that content into 64K?!” Well, one major idea is procedural content. That means the content is calculated; we don’t store the what, but the how. Let me add a simple example: say you want to store a texture that holds a glow pattern. A glow pattern is a circle that fades out towards the border. These can be used for particles, fake light glows and whatnot. So instead of storing it as a JPEG, we just remember: glow, with radius and falloff. At runtime, we render the glow before the demo starts. It should be clear that the parameters for the glow take up only a tiny fraction of the actual texture.

As these textures and their generation can get very complex, some people build editors for them. One of those is .Werkkzeug from Farbrausch. I showed it to Sascha, and he was amazed by the simplicity and the power that thing holds. I told him that I had always wanted to do something like that as a web app. He said “let’s roll”, and so we did.

Webbzeug is our, still alpha, approach to create an easy and fast procedural texture editor for the web. Sascha coded the beautiful frontend, while I hacked the operations. Our first approach was to use canvas, for simplicity and compatibility. So we started, and all was good until we added blur and lighting operations. Those were too slow. We tried some optimizations, but they were still too slow. Our second approach for the backend was WebGL, with the help of THREE.js, a nice WebGL wrapper. And that is really fast – realtime, actually.
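To make the glow example from above concrete, here is a minimal sketch (plain 2D canvas, no WebGL): the whole texture is reconstructed at runtime from just a size, a radius and a falloff exponent.

// Render a radial glow into a canvas instead of loading a bitmap.
function renderGlow(size, radius, falloff) {
  var canvas = document.createElement('canvas');
  canvas.width = canvas.height = size;
  var ctx = canvas.getContext('2d');
  var image = ctx.createImageData(size, size);
  var center = size / 2;
  for (var y = 0; y < size; y++) {
    for (var x = 0; x < size; x++) {
      var dx = x - center, dy = y - center;
      var dist = Math.sqrt(dx * dx + dy * dy);
      // 1 in the middle, fading to 0 towards the border
      var intensity = Math.max(0, 1 - Math.pow(dist / radius, falloff));
      var i = (y * size + x) * 4;
      image.data[i] = image.data[i + 1] = image.data[i + 2] = 255;
      image.data[i + 3] = Math.round(intensity * 255); // glow lives in alpha
    }
  }
  ctx.putImageData(image, 0, 0);
  return canvas;
}

// Three parameters instead of a stored JPEG:
document.body.appendChild(renderGlow(256, 120, 2));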

Let’s get the party started

Enough talking, I hear you say. Okay, let’s roll. I’ll give you a quick guide and then we’ll open a sample and see what this tool can do!

The basic concept you need to understand is called “operation stacking”. We have basic operations that we can combine to create nice-looking textures. So we have to select operations, set parameters and define a running order. That’s why some people use “graphical programming” as a buzzword for this kind of GUI.

So open Webbzeug.com in your browser, right-click on “generative” and select “rect”. Now you have your first action glued to the mouse cursor. Drop it somewhere on the grid and hit space. You should see something like this:

webb1

The little eye icon marks the observed action. Now hold shift and click the action: the parameter dialog pops up. As you can see, for the rect action you can change the x and y position, the size, and the color. To change values you can either type a value, or click into the text field, hold the left mouse button and move the mouse up and down. OK, set the x and y position to zero. Now add another rect and drop it below the first one. Hit space. You should now see two rectangles. Note that the upper rectangle is now the input for the lower one. Now comes the cool part: click on the upper action while holding shift, but don’t hit space, and play around with the parameters. The rectangle defined by the upper action should change. That means you can change parameters at any point in your script or program and see the changes at a certain point, namely the observed action. Without this feature, creating procedural textures would be very time-consuming.

To make things easier, we divided the actions into three categories. Generative actions generate basic shapes like rectangles and circles, as well as noise and other patterns. Processive actions apply modifications such as deformations, color changes, or combinations of other actions. And last but not least, Memory contains load and store: these actions allow you to store the result of a part of your script and reuse it with load at another point.

To get a better idea of what Webbzeug is capable of, click on samples and select the HUD element. After it opens, the lower cont/bri action should already be selected, so you just need to hit space, and the whole thing should look like this:

webb2

Feel free to step through the single actions, hit space and see what they do, and how the workflow can look like.

Conclusion

The current state is alpha. The thing still has some bugs, and the performance isn’t close to what it will be in the final version. But it fulfills its purpose as a proof of concept.

So we hope you have fun with Webbzeug. If you do, tweet about it, spread the word, and feel free to contribute on GitHub.

Enough reading: visit Webbzeug.com


In this article, I will first take a high-level look at modern frontend architectures: In a time where web apps easily surpass 1 MB of JavaScript, what should we try to achieve? Second, based on these considerations, I’m going to argue that Backbone.js should fully support the traditional HTTP URL scheme.

The ideal web site architecture

Today’s typical web site architectures can be placed between two extremes, one being traditional server-side logic, the other being JavaScript-only single-page apps. In between, there are hybrid approaches. Pamela Fox does a great job of describing these architectures and their pros and cons. She also introduces some key requirements from the user’s perspective: Usability, Linkability and Searchability/Shareability. In her presentation, she gives a quick overview of how the architectures perform. This outlines the current situation quite well.

How should a modern site work? There are several reasons why one should combine the best of all approaches: Server-side robustness with a client-side turbo-boost. In practice, we run into problems when trying to share logic between server and client. I think this is an engineering problem that can and will be solved in the future.

So what is the key to future architecture? I think it is Progressive Enhancement from soup to nuts. Progressive Enhancement is still useful and necessary. A typical site should be able to fulfill its basic purpose somehow even without JavaScript. A machine that speaks HTTP and HTML should be able to read a site. Of course, modern web sites aren’t about static content, but about user interactivity. But in most of the cases, there are resources with a static representation, either text, video or audio.

In order to achieve Searchability and also performance, content needs to be rendered on the server to some extent. Twitter learned this lesson the hard way in the “#NewTwitter” days, when they experimented with completely client-side architecture, but ultimately went back to serving traditional HTML pages for each initial request. Still, twitter.com is a huge JavaScript app. JavaScript operates on top of the initial DOM and then takes over to speed up subsequent actions. Hopefully, we’ll see this hybrid approach more and more in the future.

Rendering HTML on the server-side is considered valuable again. That’s because the traditional stack of HTTP, URL and HTML is simple, robust and proven. It can be incredibly fast. It works in every user agent; browsers, robots and proxies are treated uniformly. Users can bookmark, share and save the content easily.

Cool URLs are cool!

Used correctly, URLs are a great thing. Web development is centered around them: Cool URLs don’t change, URLs as UI, RESTful HTTP interfaces, hackability and so on. The concept of HTTP URLs dates back to 1994.

When “Ajax” appeared in 2005, people quickly realized that it’s necessary to reflect the application state in the URL. For a long time, JavaScript apps weren’t able to manipulate the full URL silently without triggering a server request. To achieve Linkability, many JavaScript apps therefore use “Hash URLs”. It’s safe to set the fragment part of the URL, so this became common practice. Most JavaScript libraries for single-page apps still rely on Hash URLs. Among others, Backbone.js uses Hash URLs by default in its routing implementation.

Today we know that Hash URLs aren’t the best solution. In 2011 there was a big discussion after Twitter and Google introduced Hash URLs and “Hash Bang URLs” in particular. Most people agreed that this was a bad hack. Fortunately, HTML5 History (history.pushState and the popstate event) makes it possible to manipulate the URL without leaving the single-page app. In general, Hash URLs should only be used as a fallback for older browsers.

If you use pushState, all URLs used on the client need to be recognized by the server as well. If a client sends a request such as GET /some/path HTTP/1.1, the server needs to respond with a page that at least starts the JavaScript app. In the end, making the server aware of the request path is a good thing. Instead of just responding with the code for the JavaScript app as a catch-all, the server should respond with useful content. In this case, traditional URLs enable Searchability and Shareability. Take, for example, a URL like this:

http://www.google.com/search?hl=en&ie=utf-8&q=pushState

These kinds of URLs are a well-established standard, widely supported, and can be handled on both the server and the client. So my conclusion is: Future JavaScript-heavy web sites may be “single page apps” because there’s only one initial HTML document per visit, but they still use traditional URLs.
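A minimal sketch of this hybrid pattern with the plain History API (using today’s DOM helpers like URL and closest; runSearch stands in for whatever the app does with the query):

// Intercept internal links, update the real URL, let the app take over.
document.addEventListener('click', function (event) {
  var link = event.target.closest('a[data-internal]');
  if (!link) return;
  event.preventDefault();
  history.pushState(null, '', link.href); // full path including query string
  runSearch(new URL(link.href).searchParams.get('q'));
});

// Back/forward buttons: restore the state encoded in the current URL.
window.addEventListener('popstate', function () {
  runSearch(new URL(location.href).searchParams.get('q'));
});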

Backbone.js and query strings

Backbone.js has a great History module that observes URL changes and allows you to set the URL programmatically. However, it doesn’t support traditional URLs completely: the query part (?hl=en&ie=utf-8&q=pushState), also known as the query string, is ignored when routing. In this second part of the article, I’d like to discuss the ramifications of this missing feature.

Backbone treats /search?q=heaven and /search?q=hell as the same URL. This renders the query string useless. You can “push” URLs with different query strings, but if the user hits the back button, Backbone won’t consider this a URL change, since it ignores the change in the query string.

Chaplin.js, an opinionated framework on top of Backbone.js, tries to work around this by parsing the query string into a Rails-like params hash. But it ultimately fails to support query strings because of Backbone.History’s limitation. Full disclosure: I’m a co-author of Chaplin.

The lack of query string support in Backbone is deliberate. The maintainer Jeremy Ashkenas decided against it. In several GitHub issues, he provides rationale:

From issue 891:

In the end, I think most Backbone apps should definitely not have query params in their app URLs — they’re a server-side URL convention that doesn’t have much useful place in client-side routing. So we shouldn’t be supporting them by default — but if you want this behavior, it should be easy enough for you to implement

From issue 2126:

Backbone shouldn’t be messing with the search params, as they don’t have a valid semantic meaning from the point of view of a Backbone app. If you want to use them (on a page that has a running backbone app), that’s totally fine …

In the most recent issue, Jeremy points out that this is not a browser compatibility issue:

From issue 2440:

wookiehangover: The thing that’s problematic about this (and why querystrings are ignored as of 0.9.9) is due to a handful of very weird but very real bugs with querystring processing and character encoding between browsers.

Nope. Not in the slightest ;)

The reason why querystrings are ignored by Backbone is because:

Querystrings only have a defined meaning on the server-side. The browser does not normally parse or otherwise handle them.

While querystrings are fine in the context of real URLs, querystrings are entirely invalid in the context of #fragment URLs. Most Backbone apps deal with fragment urls sooner or later — even if you’re using pushState for most of your users, IE folks will still have fragments. So querystrings can’t be used in a compatible way.

Better to leave them out of your Backbone app, and use nice URLs instead. If you must have them for the server side of the equation, that’s fine — Backbone will just ignore them and continue about its business.

Party like it’s 1994!

I’d like to respond to these statements here. First of all, it’s great to hear that there are no major browser issues blocking full URL support. Jeremy argues against query strings on another level:

Querystrings only have a defined meaning on the server-side. The browser does not normally parse or otherwise handle them.

Honestly, I don’t understand this point. You can process a query string on the server, but you can do that on the client as well. There are cases where query strings are processed almost exclusively on the client, for example the infamous utm_ parameters for Google Analytics.

A URL is a URL. Wherever a URL appears, its parts have a defined meaning – there are Internet Standards which define the meaning. It doesn’t matter which software generates the URL and which processes it, a query string should have the same meaning.

While querystrings are fine in the context of real URLs, querystrings are entirely invalid in the context of #fragment URLs.

This assumes that Backbone apps use Hash URLs instead of pushState. Well, most of them do and that’s indeed a source of pain. But technically the query string ?foo=bar is entirely valid inside the fragment part of a URL.

A URL like http://dahl.example.org/#search?q=matilda may look weird, but it is completely in line with RFC 3986. With pushState, you don’t have to think about URLs in URLs. You can use URLs like http://dahl.example.org/search?q=matilda. This is the form of URLs that has been around since 1994, for good reasons.

… even if you’re using pushState for most of your users, IE folks will still have fragments. So querystrings can’t be used in a compatible way.

Well, they can be used in a compatible way. It’s technically possible to put path and query string into a fragment. It might violate the semantics of traditional URLs, but syntactically, it’s still a valid URL.

Better to leave them out of your Backbone app, and use nice URLs instead.

Jeremy argues that client-side apps should encode query params inside the path, like

http://dahl.example.org/#books/order=asc/sort=published/

That’s what he calls a “nice URL”. I beg to differ. In the spirit of 1994, why not stick to traditional URLs like:

http://dahl.example.org/books?order=asc&sort=published

I see no reason why JavaScript apps should invent new URL syntaxes. Today’s JavaScript apps are using pushState and properly accessible URLs. They should not and don’t have to differ from the URL conventions that have been used since the beginning of the web.

It’s an RFC-compliant URL, and there are plenty of server and client implementations that parse the query params into a useful hash structure. In contrast, if you use URLs like

http://dahl.example.org/#books/order=asc/sort=published/

… you cannot use these implementations, but have to write your own “nice URL” parser instead.
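For comparison, parsing a traditional query string into a params hash takes a few lines on the client, too (a simplified sketch that ignores repeated keys):

function parseQuery(queryString) {
  var params = {};
  queryString.replace(/^\?/, '').split('&').forEach(function (pair) {
    if (!pair) return;
    var parts = pair.split('=');
    params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
  });
  return params;
}

parseQuery('?order=asc&sort=published');
// => { order: 'asc', sort: 'published' }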

If you must have them for the server side of the equation, that’s fine — Backbone will just ignore them and continue about its business.

If you’re building an app that has accessible documents, traditional URLs and query strings, you most likely need to process the query string both on the server and on the client side. For such apps, a server that understands query strings while Backbone ignores them is not an option.

My fellow Chaplin author Johannes Emerich pointed out another reason why Backbone should not limit the use of URLs:

In the end the point is that Backbone is said to be an unopinionated framework. But pushing for query params to be encoded as paths is anything but unopinionated or flexible.

There are many reasons why you would want to see those params on the server: Include some JSON data to be processed immediately on client-side app startup; render a full initial static document that contains all the data and only let the client-side app take over from there (for speed/SEO), etc.

In effect, this way of handling params in URLs is saying that Backbone is really only meant for completely client-side apps, and that you have to jump through extra hoops if you are going for a hybrid approach.

Of course, Chaplin and other code could monkey-patch Backbone to introduce query string support. But since Backbone claims to be “unopinionated”, it should simply support traditional URLs instead of making query strings impossible to use. The ultimate decision for or against query strings should be the user’s, not the library’s.

In short, Backbone should support query strings because future-proof JavaScript apps are based on traditional URLs.

Thanks to Johannes Emerich (knuton) for feedback and input.


How we built the data visualization tool GED VIZ


by Mathias Schäfer on July 10th, 2013

Last week we released GED VIZ, a tool to create data visualizations for the web. It’s free to use and also open source! See the announcement for general information.

GED VIZ is a large JavaScript (“HTML5”) application that runs in modern web browsers. It’s made using open web technologies: HTML, CSS, JavaScript and SVG. In this follow-up post we’d like to elaborate on the technical implementation.



GED VIZ: An HTML5 data visualization tool


by Mathias Schäfer on July 9th, 2013


Good visualisations are more than just fancy graphics. They are largely about storytelling: shedding light on important issues while inspiring us to raise new questions.

Building such visualisations can be a very time-consuming effort, mostly requiring hand-crafted creative input. Consequently, we’re looking for ways to generate such visualisations without being experts in visual design, which in turn could make data even more accessible.

When the Bertelsmann Foundation, a well-known German non-profit organization and think tank, investigated the relations between European states in the time of crisis, they found an inspirational visualisation in the New York Times called “Europe’s Web of Debt”.

However, this was just a static graphic with no way to interact, add data or watch the data change over time. The Bertelsmann Foundation saw a lot of potential in building a tool to create interactive visualisations of economic and demographic relations between states and unions. The GED project “intends to contribute to a better understanding of the growing complexity of economic developments”.

After a long ideation and design process with Boris Müller and Raureif, they approached us to build this tool with whatever was feasible with state-of-the-art technology. Some time later, we’re very proud to introduce the GED VIZ tool, which was finally released on July 2nd.

The GED VIZ editor

GED VIZ is a complex HTML5 application that runs right in the web browser. Using the editor interface, you can create slideshows of interactive charts that visualize economic indicators and relations of countries and their change over time. The slideshows can be embedded into other websites as well. For example, a news site or a blog can embed the visualization into their articles and comments. Users can also share and export the visualization or download the raw data.

On the GED website, there are several articles enriched with interactive visualizations. The following presentation illustrates the story “Shutting Out the BRICs? Why the EU Focuses on a Transatlantic Free Trade Area” by Justine Doody.

To get started with the tool, you can watch the tutorial video on YouTube.

Under the hood, GED VIZ is made with open web technologies. It is a large-scale client-side JavaScript application using our Chaplin.js architecture. On the server side, there is a Ruby on Rails application crunching the data which is stored in a MySQL database. We’ve written another detailed post on the technical implementation.

GED VIZ is a free online tool you can use without prior registration. It is also an open source project. The full code was released under the MIT license and is available on GitHub. We invite everyone to study the code and advance the tool, for example by adding new data sources and new abilities to tell stories.

GED VIZ is our latest take on data visualization using state-of-the-art web technologies. We hope that GED VIZ will be used to create impressive and insightful presentations. Many thanks to the GED team at Bertelsmann for letting us create such an application and release it as an open source project. Also thanks to the designers, testers and prototype developers involved!

Try out GED VIZ at viz.ged-project.com


Free PSD: HTC one, iPhone & iMac


by Eray Basar on May 7th, 2013

free HTC, iMac, iPhone vector PSD

We were recently asked to create a series of devices for our fantastic client cliqz (app store). They were looking for a great way to show that they are multi-platform, namely iOS, Web and Android. So we created a nice visual to better showcase their product on an iPhone, an HTC One and a gorgeous iMac. You can see a glimpse of the result on top :) The finished piece will be on their website soon.

Now, here comes the great part: Thanks to cliqz, you can download the full PSD featuring all three devices here!

This is for educational purposes: we want you to dig through the PSD and check out the layer styles so you can learn how to do this yourself. You are also allowed to use all three devices for personal or commercial purposes. Credit is not necessary but well appreciated! Please don’t try to make a profit by selling this PSD or an altered version of it; we want it to be free. We’d also appreciate you not rehosting the file elsewhere – rather, link to this article if you want to share it with your friends and colleagues!


Russian translation

JavaScript application development is a hot topic and people are wondering which framework they should pick. In this post I’m going to compare two of them.

Marionette and Chaplin are frameworks on top of the popular Backbone.js library. Both seek to ease the development of single-page JavaScript applications. In such applications, the client performs tasks that were typically performed on the server, like rendering raw data into HTML.

Backbone is designed as a minimalist library rather than a full-featured framework. My experience has shown that Backbone can only provide the foundation of a JavaScript application architecture. Both Marionette and Chaplin arose because Backbone provides too little structure for real-world apps. They respond to the same problems, so there are a lot of similarities between the two – perhaps more than differences.

First of all, I have to disclose that I’m a co-author of Chaplin. But I’ve also worked with Marionette in production and I’m following Marionette’s development. There is another ambitious framework on top of Backbone, named Thorax. Since I haven’t worked with it in production, I don’t feel qualified to include Thorax in this comparison.

Contents

  1. Non-technical aspects
  2. Common features that fill Backbone’s gaps
  3. Key features of Marionette
  4. Downsides of Marionette
  5. Key features of Chaplin
  6. Downsides of Chaplin
  7. Conclusion

Non-technical aspects

I’m going to talk about the technical details soon, but let’s face it, decisions between software libraries are largely influenced by their perceived momentum, reputation, success stories and documentation.

Marionette and Chaplin are MIT-licensed open-source projects that are being actively developed on GitHub. The authors have developed several bigger Backbone apps and took their experience to write layers on top of Backbone so you don’t have to repeat their mistakes.

Well-known companies are using Marionette and Chaplin to develop their products. It’s hard to estimate, but the user base is probably about the same size. The Marionette ecosystem is broader, so a lot of people use parts of Marionette without using the whole library.

Chaplin was more popular in the beginning, but Marionette has recently gained popularity. Marionette is beginner-friendly and has great documentation, which is probably the most important reason people choose it. I think the commitment of Derick Bailey, the creator of Marionette, is one of the reasons for Marionette’s success. He has written numerous key articles about developing Backbone apps, and he gives talks and records screencasts, too.

Common features that fill Backbone’s gaps

Event-based architectures without the mess

Backbone’s key feature is the separation of concerns between models and views. They are connected using events and event listening. Using Backbone.Events, you can build an event-driven architecture. This is a great way to decouple the parts of your application.

Both Marionette and Chaplin identify the major pain points of Backbone apps. In an event-based architecture, cleaning up listeners is crucial. Components in your application need to have a defined lifecycle: A particular component creates another and is responsible for its later disposal. Marionette and Chaplin both address this problem with different approaches. They not only advocate event-based communication using Publish/Subscribe and related patterns, but also provide good means to avoid its pitfalls.
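Backbone itself already hints at the solution with listenTo: subscriptions are recorded on the listening object, so a single call can clean them all up. A tiny sketch (user is a hypothetical model instance):

var UserBadge = Backbone.View.extend({
  initialize: function () {
    // Recorded on the view as well as on the model:
    this.listenTo(this.model, 'change:name', this.render);
  }
});

// view.remove() calls stopListening() internally, so the model drops its
// reference to the view and the view can be garbage-collected.
var badge = new UserBadge({ model: user });
badge.remove();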

Application architecture

Models and views are low-level patterns. On top of that, Backbone only provides Routers. This is a very thin layer and probably the most confusing and problematic part of Backbone. With Backbone.Router alone, it’s not possible to set up a proper top-level architecture that controls the lifecycle of your objects. Both Marionette and Chaplin re-introduce controllers and a managing layer on top of them.

Strong view conventions

Following Backbone’s philosophy of simplicity, Backbone views and view rendering are rather abstract patterns. A Backbone view holds and controls a specific DOM element, but Backbone leaves it up to you how to fill this element and how to add it to the live DOM – the render method of views is empty per default.

Marionette and Chaplin provide view classes with a sane default rendering mechanism (see Marionette.ItemView and Chaplin.View). You just need to choose a template language like Mustache/Hogan, Handlebars or HAML Coffee.

Both libraries have conventions on when to render views and how to add them to the DOM. You can transform the model data before it is passed to the template. This is useful for computed properties, for example.

Views are probably the most complex part of your application, so Marionette and Chaplin provide several helpers and shortcuts. They allow you to nest views in a safe way and declare named regions. They also allow you to register model events in a declarative way, which is easier and more readable than calling this.listenTo(this.model, …) several times.

If you’re using plain Backbone, you will definitely miss view classes for rendering collections (see Marionette.CompositeView and Chaplin.CollectionView). Using item views and two templates – a container template and an item template – complex interactive lists can be implemented with clean and well-structured code. These collection views listen for collection events and re-render only those models that have been added, removed or changed their position.
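A sketch of such a collection view in Marionette 1.x terms (the template selector and the tasks collection are assumptions):

var TaskView = Marionette.ItemView.extend({
  tagName: 'li',
  template: '#task-template',
  modelEvents: { 'change': 'render' } // declarative model events
});

var TaskListView = Marionette.CollectionView.extend({
  tagName: 'ul',
  itemView: TaskView // one ItemView per model; add/remove handled for us
});

new TaskListView({ collection: tasks }).render();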

Key features of Marionette

Marionette is a treasure trove of useful patterns for structuring your app. It’s quite modular; you don’t need to use everything Marionette provides. It’s easy to start with some features of Marionette and discover others later. Some of Marionette’s features come from separate Backbone plugins, namely Backbone.BabySitter and Backbone.Wreqr, which are part of the Marionette family.

Marionette has some great unique features. In my opinion, the strongest points are application modules and the smart view management.

Application modules

Application modules are independent parts of your app that may consist of routers, controllers, models and views. Modules can be started and stopped, and you can define initializers as well as finalizers. Modules can also be lazy-loaded when a route matches; they don’t need to be active right from the beginning.

BBCloneMail is an example app that consists of two modules (mail and contacts). In this example, only one module is active at a time. In general, app modules don’t have to be mutually exclusive. The modules have associated routers that need to be active from the beginning (contacts router, mail router).

Modules can be nested. Your main application, Marionette.Application, is also a module. Technically there are some differences between Marionette.Application and Marionette.Module, but I hope they will become more similar in the future.

You probably don’t need several modules right from the beginning, but it’s a powerful feature that helps to break up an app into smaller, coherent units.
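A sketch of the module API as of Marionette 1.x (the module name and bodies are made up):

var App = new Marionette.Application();

App.module('MailApp', function (MailApp, App, Backbone, Marionette, $, _) {
  MailApp.addInitializer(function () {
    // set up this module's routers, controllers and views
  });
  MailApp.addFinalizer(function () {
    // tear everything down when the module is stopped
  });
});

App.start();                  // starts the app and its modules
App.module('MailApp').stop(); // modules can also be stopped independently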

View management

Another strong part of Marionette is its sophisticated view management. Views can be nested easily and safely using the aforementioned BabySitter. In addition, Marionette introduces abstractions called Layouts and Regions. A Layout is a view that holds several named Regions. So what is a Region? It’s an object that manages an element in the DOM where it can insert a view. Example regions are header, navigation, main, sidebar and footer.

How and where should I render views and append them to the DOM? Regions are the answer. Instead of messing with DOM element references directly, you declare a Region once and later just say mainRegion.show(view), for example. This renders the view and attaches it to the DOM element that corresponds to the region. A Region holds only one view at a given time, so the old view is “closed” (i.e. removed from DOM and disposed safely).

With nested regions, building a complex UI gets easier and the code gets more readable and maintainable.
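In code, the idea looks roughly like this (Marionette 1.x naming; the template selector and both views are hypothetical):

var AppLayout = Marionette.Layout.extend({
  template: '#app-layout-template',
  regions: {
    header: '#header',
    main:   '#main'
  }
});

var layout = new AppLayout();
layout.render();
layout.main.show(new ArticleView());  // renders and attaches the view
layout.main.show(new SettingsView()); // the ArticleView is closed safely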

Downsides of Marionette

For brevity, I have just mentioned two unique points of Marionette. Most of its features are mature and well implemented. What I don’t like are thin abstraction layers and unclear best practices.

Routing and controllers

For example, Marionette provides little on top of Backbone.Router. In my opinion, this is important because Backbone.Router provides no convention for disposing of the objects created (typically models and views) when another route becomes active. It’s possible to implement a central cleanup using route events, but that’s a hack.

In Marionette there are application modules that can be stopped and Regions that can be closed. But as far as I can see, you’re not supposed to start and stop modules over and over, or to close regions explicitly.

Marionette.AppRouter is a step in the right direction. The idea is to separate the route configuration from the actual handler code. An AppRouter delegates all route matches to a separate Controller instance.

Controllers in Marionette don’t have a single fixed purpose; they just control something. They can listen to events using the Backbone.Events mixin, and they have initialize and close methods. This is definitely useful, but it’s up to you whether and how you use them. Typically, this is the place where you create models and views.
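A sketch of the route/controller split (Marionette 1.x; the routes and handler bodies are made up):

var MailController = Marionette.Controller.extend({
  inbox: function ()   { /* create models and views for the inbox */ },
  show:  function (id) { /* load and display a single mail */ }
});

var MailRouter = Marionette.AppRouter.extend({
  appRoutes: {
    'inbox':    'inbox', // route -> method name on the controller
    'mail/:id': 'show'
  }
});

new MailRouter({ controller: new MailController() });
Backbone.history.start();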

Global vs. private objects

In Marionette, the modules and classes are saved in a global hierarchical namespace, for example BBCloneMail.MailApp.Controller. The actual instances don’t have to be global, but it’s tempting to do so. In the BBCloneMail example, some objects are passed and returned while others are global (e.g. BBCloneMail.MailApp.controller).

From reading the code, it’s unclear which objects are global and which are actually accessed globally. When using Marionette, I suggest implementing an object-capability model that defines ways to connect objects without using the global scope.

Templating defaults

By default, views read their templates from the DOM and compile them with the Underscore template engine (_.template). That’s easy to start with, but embedding template code in your HTML is not a good practice. Eventually, templates should be separate files that can be precompiled and lazy-loaded. Of course, you can change Marionette’s default behavior easily: the Renderer singleton is in charge.
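Overriding it is a one-liner, sketched here for precompiled templates living in a hypothetical JST object produced by a build step:

Marionette.Renderer.render = function (template, data) {
  if (!JST[template]) {
    throw new Error('Template not found: ' + template);
  }
  return JST[template](data); // template names now refer to JST keys
};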

Key features of Chaplin

Compared to Marionette, Chaplin acts more like a framework. It’s more opinionated and has stronger conventions in several areas. It took ideas from server-side MVC frameworks like Ruby on Rails which follow the convention over configuration principle. The goal of Chaplin is to provide well-proven guidelines and a convenient developing environment.

CoffeeScript and OOP

Chaplin is written in CoffeeScript, a meta-language that compiles to JavaScript. However, Chaplin applications do not have to be written in CoffeeScript. In the end, Chaplin is just another JavaScript library.

Using CoffeeScript is part of Chaplin’s idea to make application development easier and more robust. CoffeeScript enforces guidelines from Douglas Crockford’s book “JavaScript – The Good Parts”. Like Marionette, Chaplin is advocating the ECMAScript 5 Strict Mode.

With CoffeeScript, class declarations and class-based inheritance are more compact compared to Backbone’s extend feature. While Marionette tries to get around super calls, Chaplin embraces method overriding and tries to make class-based inheritance work smoothly. For example, if you declare event handlers on a derived class and on its super class, both will be applied.

Modularization using CommonJS or AMD

Chaplin requires you to structure your JavaScript code in CommonJS or AMD modules. Every module needs to declare its dependencies and might export a value, for example a constructor function or a single object. In Chaplin, one file typically contains one class and defines one module.

By splitting up your code into reusable modules and declaring dependencies in a machine-readable way, code can be loaded and packaged automatically.

Using AMD isn’t easy; you need to get familiar with loaders like Require.js and optimizers like r.js. As an alternative, you can use the CommonJS module format and Brunch as a processor.
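For illustration, a single Chaplin-style class as an AMD module (the module paths are assumptions; one file, one class, one exported value):

define(['views/base/view', 'templates/like_button'],
  function (View, template) {
    'use strict';

    var LikeButtonView = View.extend({
      template: template
    });

    return LikeButtonView; // the module's exported value
  });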

Marionette also supports AMD. You can structure Marionette apps using AMD modules if you like, but it’s not a requirement.

Fixed application structure

Chaplin provides a core application structure that is quite fixed. It handles the main flow in your app.

  • The Application is the root class that starts the following parts
  • The Router replaces Backbone.Router
  • The Dispatcher starts and stops controllers when a route matches
  • The Layout is the top-level view that observes clicks on links

In Chaplin, there is a central place to define all routes. A route points to a controller action. For example, the URL pattern /cars/:id points to cars#show, that is, the show method of the CarsController.

A controller is the place where you create models. It’s also responsible for creating the view for the main content area. So a controller usually represents one screen of your app interface.
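Sketched in code, Chaplin’s central routes module looks roughly like this in CommonJS form (the routes themselves are made up):

module.exports = function (match) {
  match('', 'home#index');
  match('cars/:id', 'cars#show');  // the show method of CarsController
  match('search', 'search#index');
};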

Object disposal and controlled sharing

The main idea of Chaplin is disposable controllers. The basic rule is: the current controller and all its children (models, collections, views) are disposed when another route matches. Even if the route points to another action of the same controller, the controller instance is disposed and a new one is created.

Throwing objects away when another route matches is a simple and effective rule for cleaning up references. Of course, some objects need to remain in memory in order to reuse them later. The Chaplin.Composer allows you to share models and views in a controlled way. You need to mark them as reusable explicitly. If the saved object is not reused in the next controller action, it is automatically disposed.

In a Chaplin app, every object should be disposable. All Chaplin classes have a dispose method that will render the object unusable and cut all ties.

Private instances and Publish/Subscribe

A well-known rule of JavaScript programming is to avoid global variables. Chaplin tries to enforce this best practice. Classes are CommonJS/AMD modules that are hidden in a closure scope. All instances should be private. Two instances should not have references to each other unless they are closely related, like a view and its model.

Objects may communicate in a decoupled way using the Publish/Subscribe pattern. For this purpose the Chaplin.Mediator exists. The mediator can also be used to share selected instances globally, like the user object. After creating the necessary properties, the mediator object is sealed so it doesn’t become the kitchen sink of your app.
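A sketch of both mediator uses (the publish/subscribe naming follows the Chaplin API of that time; currentUser is a stand-in):

// Any module can react to an event without knowing who sent it:
Chaplin.mediator.subscribe('login', function (user) {
  // e.g. swap the login button for the user menu
});

// ... and somewhere in a session controller:
Chaplin.mediator.publish('login', currentUser);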

View management

Chaplin is also strong at view management. It has app-wide named regions and subview managing. Chaplin takes a different approach on rendering views and attaching them to the DOM. Views may have an autoRender flag and a container option. With these enabled, views are rendered on creation and are automatically attached to the DOM. Instead of container, you can specify region in order to attach the view to a previously registered region.

Apart from the app-wide regions there are no abstraction classes like Marionette.Layout and Marionette.Region. In a Marionette app, you typically create several nested Layouts and Regions. In a Chaplin app, you have fewer key regions and directly nest views inside of them. Of course you can create reusable views that behave like a Marionette.Layout, for example a ThreePaneView.
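For example, a view that places itself (a sketch; the selector and region name are assumptions):

var SidebarView = Chaplin.View.extend({
  autoRender: true,       // render on creation
  container: '#sidebar'   // or: region: 'sidebar' for a registered region
});

new SidebarView(); // no manual render()/appendTo() calls needed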

Downsides of Chaplin

As one of the main authors of Chaplin, I may be biased. But I do see weaknesses and room for improvement. It’s obvious that Marionette found better solutions to specific problems.

As I pointed out, Chaplin defines each component’s lifecycle and therefore is strong in memory management. When developing Backbone applications, this was one of our major problems. Chaplin found a solution that works well, but it isn’t perfect and it’s surely debatable. This feature already evolved and needs to evolve even further.

For beginners, it’s not easy to grasp the whole Chaplin picture. Memory management, modularization and other Chaplin concepts are still new to many JavaScript developers. While Chaplin’s rigidity seems to be a burden in the beginning, an app will benefit from it in the long term.

Publish/Subscribe isn’t a unique feature of Chaplin but can be compared to Marionette’s application vent. In fact Marionette is more flexible because every application module comes with its own Event Aggregator.

Chaplin is using Publish/Subscribe to broadcast events, but also to trigger commands with callbacks. This is rather a misuse of the pattern. Backbone.Wreqr implements the Command and Request/Response patterns for this purpose. Chaplin should learn from Marionette in this regard.

Conclusion

Marionette is rather modular: you can pick the patterns you like. (In my opinion, you should pick most of them because they can improve your app.) Instead of having one central structure, you can create a composite architecture with independent application modules. This offers great flexibility and allows decoupling, but you need to figure out how to use these building blocks properly.

Chaplin is more like a framework, it’s centralized and rather strict. The Chaplin authors think these guidelines offer convenience and boost productivity. Your mileage may vary, of course.

Because of its goals, Chaplin has a broader scope and deals with several problems that other libraries do not address. For example, Chaplin has a feature-rich routing and dispatching system that replaces Backbone.Router but makes use of Backbone.History.

Compared with Marionette, Chaplin is rather monolithic. That doesn’t mean you can’t do things differently. You can configure, modify or exchange the core classes and break all rules.

Standing on the shoulders of giants

So which library should you pick? I don’t think it’s an exclusive choice. Obviously, you should build upon the library whose core concepts meet your demands. But you should examine both to understand and apply their patterns.

When using Backbone, you need to set up a scalable architecture yourself. Do not write applications in plain Backbone and make the same mistakes others did, but put yourself on the shoulders of giants. Have a deeper look at Marionette, Thorax, Aura, Chaplin and other architectures to learn from them.

To get started with Chaplin, I recommend using one of the boilerplates: the CoffeeScript boilerplate with Handlebars templates, or the same in plain JavaScript. These incorporate several conventions we consider useful: folder structure and file naming conventions, coding style, and template engines. They are part of “the Chaplin experience”.

If you’re looking for a mature quick-start developing environment, you may give Brunch with Chaplin or Chaplin’s Ruby on Rails boilerplate a try.

For a more hands-on introduction to Marionette, see this article on Smashing Magazine: part one and part two. In the Marionette Wiki, there’s a whole list of articles, screencasts and presentations.

Credits

Thanks to Derick Bailey, Sebastian Deutsch, Paul Miller and Paul Wittmann for their feedback on this article and their contributions to both Marionette and Chaplin.


Customizing Core Data Migrations


by Christopher Gretzki on January 22nd, 2013

If you are using Core Data and need to change your database schema, but Core Data cannot infer the changes on its own, and you don’t want to dig into the Core Data Programming Guide, you have come to the right place.
