Developing a custom watch face for Android Wear


by Sebastian Deutsch on July 9th, 2014


The brand new Samsung Gear Live has arrived, and we took the time to play with it and, more importantly, to develop something for it. For a head start into Android Wear development, check out the official docs and this video from the latest Google I/O. When we took a look at the stock watch faces, we found them hideous and not very digital. So we sat down and implemented our own watch face – unfortunately, the official docs don't cover that topic yet. But digging around, we found this Reddit thread, which explains that it's not too difficult to modify your AndroidManifest.xml so that your activity can be used as a watch face. The code is freely available on GitHub, and if you have questions you can ping us on Twitter.
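For the curious, the gist of the hack (reconstructed from memory of that thread, so treat the exact attribute and category names as an assumption) is to mark the app as a watch app and add Android Wear's home-background category to your activity's intent filter in AndroidManifest.xml:

<uses-feature android:name="android.hardware.type.watch" android:required="true" />

<activity android:name=".WatchFaceActivity" android:allowEmbedded="true">
  <intent-filter>
    <action android:name="android.intent.action.MAIN" />
    <!-- Lets the Wear launcher offer this activity as a watch face. -->
    <category android:name="com.google.android.clockwork.home.category.HOME_BACKGROUND" />
  </intent-filter>
</activity>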

You can download the watch face for free in the Google Play Store.



Our First Experience with Swift


by Manuel Binna on June 17th, 2014


Every other month or so we like to do a small Hackathon at 9elements. Last week, after months of hard client work, we finally had the chance to have one again. During the two-day event, several teams gather to explore new technologies, techniques, and tools, and to build something for fun. After Apple announced its new programming language at WWDC 2014 earlier this June, we were excited to explore it at our next Hackathon.

DOAR

At our last Hackathon, we created a small app called DOAR, a door opener based on Arduino. DOAR is connected to our LAN and provides a simple API to open the door to our office. We’ve now extended DOAR so that it broadcasts the door ring to connected clients. Clients establish a WebSocket to DOAR. When the door rings, DOAR broadcasts a message to all connected clients. Clients use their WebSocket to send “open door” commands to DOAR.

At this Hackathon, we created an OS X application with Xcode 6 (Beta) and Swift. The application opens a WebSocket to DOAR. When it receives a doorbell message through the socket, it shows a notification in Notification Center. You can interact with the notification directly to open the application and issue the "open door" command. DOAR immediately sends a "did open door" command to connected clients when it receives the first "open door" command. This allows the OS X application to remove the notification from Notification Center so that it gets out of the user's way.

DOAR notification on OS X

Swift

The DOAR client application for OS X was implemented entirely in Swift (and Interface Builder). The following paragraphs give an overview of things we had not known or heard about before.

Access Control

Swift currently has no support for access control modifiers like Objective-C's @public, @private, and @protected. Apple will deliver support for access modifiers in the final release of Xcode 6 in fall 2014 [1].

Selectors

When registering for notifications with a particular name via NSNotificationCenter, you typically provide a selector that gets called when some other part of the application posts a notification with that name. In Swift, an Objective-C selector is simply a String.

NSNotificationCenter.defaultCenter()
                    .addObserver(
                        self, 
                        selector: "connectionDidReceiveDoorRing:",
                        name: ConnectionDidReceiveDoorRingNotification, 
                        object: connection)

When initially writing the previous code, we forgot to include the colon in the selector. This caused a crash at the call site (when another part of the code posts that notification).

NSNotificationCenter.defaultCenter()
                    .postNotificationName(
                        ConnectionDidReceiveDoorRingNotification, 
                        object: self)

This makes sense, since the implementation of NSNotificationCenter synchronously invokes observers of that notification. However, the point at which the debugger stops in Xcode (at the exception breakpoint) and the debug output do not really indicate that the issue is a nonexistent selector in the observer.
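For completeness, here is roughly what the observer side looks like – the trailing colon in the selector string corresponds to the method's single notification parameter (the method body is ours):

func connectionDidReceiveDoorRing(notification: NSNotification) {
    // Post the door ring to Notification Center, start a timeout, etc.
}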

Strings

In Swift, string parameters are passed by value: if a call site provides a String as a parameter to a function, the function semantically receives its own copy (behind the scenes, the compiler optimizes so that actual copying only takes place when necessary). This is in stark contrast to Objective-C, where NSString is passed by reference.
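A minimal illustration of these value semantics:

var original = "DOAR"
var copy = original
copy += " rings"

println(original) // "DOAR" – unaffected by the change to copy
println(copy)     // "DOAR rings"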

NSTimer

Usually, all Objective-C classes directly or indirectly inherit from NSObject. Swift classes, on the other hand, don't need a base class at all – it is perfectly fine to implement a Swift class that has no superclass.

When using NSTimer in Swift code, the target must inherit from NSObject; otherwise the timer is not able to invoke the selector on the object.
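A minimal sketch (class and method names are ours) – the target inherits from NSObject so the Objective-C runtime can dispatch the selector:

class DoorBellPoller: NSObject { // NSObject inheritance is required for NSTimer targets

    func start() {
        NSTimer.scheduledTimerWithTimeInterval(5.0, target: self,
            selector: "poll:", userInfo: nil, repeats: true)
    }

    func poll(timer: NSTimer) {
        // Ask DOAR whether the door rang since the last poll.
    }
}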

Conclusion

We think that Swift is a well-engineered programming language with great potential. It takes time to master a programming language, but since we started coding in Swift we have been delighted by the simple and elegant code it allows us to write compared to Objective-C.

We are very excited about Swift and look forward to the final release this fall. In fact, we are excited enough that we created Swift Weekly, a weekly newsletter with the most interesting links to blog articles, code, and other stuff about Swift. Subscribe at swiftweekly.com or follow @swift_weekly on Twitter.


External bundles with browserify and gulp


by Sebastian Deutsch on June 4th, 2014


Browserify is a nifty little tool that was originally invented to let Node.js modules run in your browser. A nice side effect is that you can use browserify to split your application's JavaScript into well-organized modules and then bundle them together with proper dependency management. In the past we used Require.js for that job, but for us it was too painful and error-prone when creating bundles for production environments. Require.js also doesn't play very nicely with Rails, and it's quite difficult to get everything working. Esa-Matti Suuronen has written a nice post comparing Require.js and browserify in depth.

Basic Usage

For those who aren't familiar with browserify, here is an example of how we're using it:

$ = require 'jquery'
QuestionView = require './question_view'

module.exports = ->
  initializeQuestions = ->
    # ...
  initializeProfessions = ->
    # ...

  questions = initializeQuestions()
  professions = initializeProfessions()
  questionView = new QuestionView(questions, professions)
  return

In the first line we're requiring jQuery using the CommonJS require syntax. In the second line we're requiring one of our views. We're also exporting a function via module.exports for further use in our application.
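To complete the picture, an entry point like the application.coffee referenced in the gulp task below would consume such a module like this (file and function names are made up):

# application.coffee – hypothetical entry point of the browserify build
$ = require 'jquery'
initializeQuestionnaire = require './questionnaire'

$ ->
  initializeQuestionnaire()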

Gulp Integration

For Node.js projects we're using Gulp for our build chain. The browserify integration works like a charm:

gulp.task 'browserify', ->
  browserifyOptions =
    transform: ['coffeeify']

  gulp.src("#{BASES.src}/javascripts/application.coffee", { read: false })
    .pipe(browserify(browserifyOptions))
    .on('error', gutil.log)
    .on('error', gutil.beep)
    .pipe(rename("application.js"))
    .pipe(gulp.dest("#{BASES.build}/javascripts"))
    .pipe(refresh(lrserver))

Browserify takes application.coffee, processes it by requiring all the dependencies, and then spits out a bundled application.js that can be used in your HTML. A pretty straightforward workflow – but it has a flaw: when your application grows and your dependencies add up, this gulp task can take a while. On our latest project, a single browserify run took up to 12 seconds.

External bundles

External bundles are a mechanism of browserify that lets you require dependencies which are not directly processed by the actual build step. We used them to create two bundles: the first contains all vendor JavaScript code like jQuery, D3 and plenty of other stuff; the second contains all our app-related JavaScript. First, list all your dependencies in a separate array:

EXTERNALS = [
  { require: "lodash", expose: 'underscore' }
  { require: "jquery", expose: 'jquery' }
  { require: "es5-shim" }
  { require: "rsvp", expose: 'rsvp' }
  { require: "../../#{VENDOR_DIR}backbone-1.1.2", expose: 'backbone' }
  { require: "../../#{VENDOR_DIR}d3-3.4.3", expose: 'd3' }
  { require: "../../#{VENDOR_DIR}jquery.nouislider-5.0.0", expose: 'jquery.nouislider' }
  { require: "../../#{VENDOR_DIR}topojson-1.4.9", expose: 'topojson' }
  { require: "../../#{VENDOR_DIR}matchMedia-0.2.0.js", expose: 'matchmedia' }
]

Now create two gulp tasks, 'browserify:vendor' and 'browserify:application':

gulp.task 'browserify:vendor', ->
  gulp.src("#{BASES.src}/scripts/vendor.js", { read: false })
  .pipe(browserify({
      debug: false
  }))
  .on('prebundle', (bundle) ->
    EXTERNALS.forEach (external) ->
      if external.expose?
        bundle.require external.require, expose: external.expose
      else
        bundle.require external.require
  )
  .pipe(rename('vendor.js'))
  .pipe(gulp.dest("#{BASES.build}/scripts"))

and

gulp.task 'browserify:application', ->
  browserifyOptions =
    transform: ['coffeeify']

  prebundle = (bundle) ->
    EXTERNALS.forEach (external) ->
      if external.expose?
        bundle.external external.require, expose: external.expose
      else
        bundle.external external.require

  application = gulp.src("#{BASES.src}/scripts/application.coffee", { read: false })
    .pipe(browserify(browserifyOptions))
    .on('prebundle', prebundle)
    .on('error', gutil.log)
    .on('error', gutil.beep)

  vendor = gulp.src("#{BASES.build}/scripts/vendor.js")

  es.concat(vendor, application)
    .pipe(concat('application.js'))
    .pipe(gulp.dest("#{BASES.build}/scripts"))
    .pipe(refresh(lrserver))

The magic happens in the 'prebundle' event that gulp-browserify provides. In the first gulp task all dependencies are required, and in the second task they're declared as external. In a final step we use gulp to tie both bundles together into a single JavaScript file. If you make changes to your app, only browserify:application is run – which takes just a fraction of the time compared to the original single task.

Enter watchify

If you're still not happy with the speed of the JavaScript preprocessing, you can replace browserify with watchify. Watchify is another tool from substack, but instead of compiling all the resources from scratch every time, it keeps a cached copy of all source files and does incremental builds when something changes:

requireExternals = (bundler, externals) ->
  for external in externals
    if external.expose?
      bundler.require external.require, expose: external.expose
    else
      bundler.require external.require

gulp.task 'watchify', ->
  console.log 'watchify'
  entry = "#{BASES.src}/scripts/application.coffee"
  output = 'application.js'
  bundler = watchify entry
  bundler.transform coffeeify
  requireExternals bundler, EXTERNALS

  rebundle = ->
    console.log "rebundle"
    stream = bundler.bundle()
    stream.on 'error', notify.onError({ onError: true })
      .pipe(source(output))
      .pipe(gulp.dest(SCRIPTS_BUILD_DIR))
      .pipe(refresh(lrserver))
    stream

  bundler.on 'update', rebundle
  rebundle()

With the depicted workflow we were able to speed up our development builds by a huge factor. If you want to learn more about browserify and its internals, be sure to check out the browserify handbook. And if you like what you're reading, you should follow @9elements on Twitter.



Visual storytelling using WebGL


by Daniel Hoelzgen on April 22nd, 2014


Recently we worked on a redesign of the uformit website. Uformit, an online marketplace for personalized 3D design, had already been presented at the 3D Printshow in London and New York, but never announced to the public. It features a WebGL-powered product display that allows the user to directly form the product by adjusting its parameters and to see the result in real time.

At the show, however, it was easy to explain in person how it worked, what the story behind it was, and what is so special about a technology that allows designers to create products that can be personalized into a truly unique piece. We first thought about creating a video to explain the whole process, but after a few discussions we quickly came up with a simple yet obvious idea: if we already have the product as a 3D object on the website, why not use exactly that to tell the story around it?

Our designer Kevin Kalde came up with a design idea we instantly fell in love with: he combined traditional visual storytelling techniques with a WebGL-rendered object that moves from page to page, allowing us to explain the different steps and aspects of creating a design on uformit without losing focus. After developing a first prototype, we all agreed that it really felt right. Unfortunately, a few issues (dragons) came up…


Firefox: Works great (until it freezes)

Even without using background workers for loading the model, Firefox was the only browser that seemed to handle the model loading without any jitter. Unfortunately, after some time we stumbled upon a huge issue: under certain circumstances, the browser (v28) is likely to freeze when the WebGL canvas is hidden and then re-appears – either because it is hidden programmatically, or because the user scrolls past the element and back.

After checking out the current nightly build, we noticed that this bug seems to be fixed in version 29, which is scheduled for release on April 29th. So we decided to live with the problem for a few days and to disable the features that cause the freeze where possible. Perhaps we could have mitigated the problem with some tricks, but rather than sacrifice code quality and stability to work around an already fixed bug (and waste even more resources on the problem than we already had), we decided to just leave it for now.

Update: Unfortunately, the bug still exists in the production version of Firefox, so it seems we are forced to find a way around it. Since the debug version seems to have no problems with the page, this might involve a lot of trial and error…


IE: WebGL light

Internet Explorer up to version 10 is not able to display WebGL content at all. Version 11 supports WebGL, but there are a few things you cannot use, as explained by Microsoft in this post. They show workarounds for some of the problems, but keep in mind that some libraries like three.js are not willing to adjust their code to cater to missing features (which is basically a good idea), so you might have to keep an eye on this yourself. However, Microsoft is working on getting everything in place.


Safari: So optimized your animation does not work

Looking at the numbers, it seems Apple did a great job of tweaking Safari's performance. When you actually try to use this performance for animations, though, you slowly get the impression that they don't do it faster, they just do less. Of course, it is well known that some methods are well suited for web animations and others aren't, but if you have to do calculations based on the scrolling progress and the position of other elements, you rely on these values being updated very often as well. Even when using animation frames and getting acceptably high frame rates, the animation we implemented did not run very smoothly, especially when moving synchronously with other elements on the page while scrolling. After investigating a little further, we realized that the values we used for the position calculations were simply not updated very often, even though the page itself scrolled smoothly.

Additionally, Safari sometimes seems to have a buffer problem when the size of the WebGL context changes: a strange flickering appears during size-related animations, and features like a zoom function cause the whole screen to flicker, looking like a badly designed computer malfunction in an old science-fiction movie.

At least for the position calculations there are ways around this, like handling the scrolling yourself, so that everything at least moves synchronously slowly. Admittedly, this is not directly related to WebGL, although having the WebGL container on the page does not exactly increase performance, so you run into these issues earlier. In the end, we decided to disable WebGL in Safari completely for this specific page and serve a fallback page that is less interesting, but at least moves as fast and smoothly as intended, without forcing us to do crazy tricks. We really hope Apple works on this in upcoming Safari updates. It's true that a browser should not be optimized for one particular type of page, but forcing developers in a specific direction by simply not giving them the tools to build all kinds of web pages at the same quality cannot be the right way either.


Chrome: Thank you, Google

First of all, we used this browser during development, so perhaps there are things that work in other browsers but not in Chrome. However, there were no bad surprises – nothing behaved slowly, no workarounds were needed, nothing flickered, and nothing strange was displayed. So: thank you, Google!


WebGL – it’s almost there!

Despite the problems we had during development, the feedback we got for this page really makes up for them. People not only liked the design and the fact that the story builds up around a moving 3D object – it also really helped them understand what uformit is about, and how the process described on the page works. So, yes, of course we would do it again!

Do you have experience with WebGL for these kinds of pages? I'm happy to hear from you on this blog or on Twitter.


Stripe vs. Paymill


by Sebastian Deutsch on February 28th, 2014


As you might know, 9elements specializes in building digital products.

One of the tasks that comes with almost every product is payments – after all, you want to earn some money. When it comes to payments, there are plenty of options out there, and choosing the right payment provider can be a tough job. For many, the hurdle is not only technical but also economic: you might have to deal with credit card clearance contracts directly, or do a security audit to ensure that your service is PCI compliant.

In this blog post I want to compare Stripe and its German counterpart Paymill. Both services make it dead simple to integrate payments into your project, and both are extremely developer-friendly – no business hassle involved, just nice REST APIs!

A little background: Stripe is a US-based service that launched in 2011 and marched out to disrupt the de facto standard services like PayPal or Google Wallet. Paymill was launched by the German clone incubator Rocket Internet in 2012, and when we initially created Salon.io there was no other option, since Stripe wasn't available in Germany. Usually I'm not comfortable using copycats, but the folks at Paymill did a pretty good job of creating an awesome product: they covered all the important features of Stripe and added some good UI improvements, too. Stripe is now open to many countries and currencies, so it's time to review our technical choice. We used Stripe in a new project which will be released soon, so we know both services – here are our learnings:

Subscriptions

Subscriptions are possible in both payment solutions, but to be honest, the usage feels a bit more natural in Stripe. If you create a subscription, an invoice object is generated as well. An invoice in Stripe is not the e-mail or PDF that goes out to the customer; it's the data structure that can be used to generate that e-mail or PDF. Every time a payment recurs, another invoice object is created and your application is informed via a webhook. Webhooks are URLs that are called when an event occurs in Stripe. A bigger advantage is that a lot of corner cases can be handled automatically. For example, if a customer upgrades from the Silver to the Gold plan, you might want to give the customer a pro-rata discount for the new subscription. These things can easily be managed in the settings. With Paymill, you have to take care of such cases yourself and set them up manually.
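To give an idea of how little code this takes with Stripe's Ruby gem, here is a minimal sketch (plan ID and parameters are made up):

require 'stripe'
Stripe.api_key = ENV['STRIPE_SECRET_KEY']

# Create a customer from the card token that Stripe.js generated in the
# browser, and put the customer on a plan. Stripe generates the invoice
# objects and calls our webhook on every recurring payment.
customer = Stripe::Customer.create(
  email: 'customer@example.com',
  card:  params[:stripe_token]
)
customer.subscriptions.create(plan: 'gold')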

Coupons

Coupons are completely missing from Paymill. The options in Stripe are quite versatile, since you can choose between redemption dimensions and time dimensions (once, multi-month, forever). And since coupons are first-class citizens of the API, you can leverage them as a marketing workhorse and don't have to worry about all the calculations in your app.
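For illustration, creating and redeeming a hypothetical coupon looks like this:

# 25% off for three months – "repeating" is one of once / repeating / forever.
coupon = Stripe::Coupon.create(
  id:                 '25OFF',
  percent_off:        25,
  duration:           'repeating',
  duration_in_months: 3
)

# Redeem it when subscribing the customer:
customer.subscriptions.create(plan: 'gold', coupon: coupon.id)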

Ecosystem

Both services provide solid libraries and gems for every major programming platform like Rails or Node.js. Since Stripe has more momentum, a nice ecosystem has evolved around it. Especially for Rails there are battle-proven engines that just work. We evaluated Koudoku, a fully fledged Rails engine that handles plans and subscriptions – but we eventually found it a bit too much (it even generates the views). We went with Stripe Event instead, which only handles Stripe's webhooks, but it does what it is supposed to do and is quite lightweight. The Paymill ecosystem is not as substantial: for standard Rails projects there is no such thing as a Paymill Event – you have to deal with webhooks yourself. Side note: there are ready-to-use integrations for e-commerce projects built on the shop systems Spree or Magento. We haven't tested those, but they look quite solid.
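To show how lightweight Stripe Event is, wiring up a handler boils down to this (event types from Stripe's docs, handler bodies ours):

# config/routes.rb
mount StripeEvent::Engine => '/webhooks/stripe'

# config/initializers/stripe_event.rb
StripeEvent.configure do |events|
  events.subscribe 'invoice.payment_succeeded' do |event|
    invoice = event.data.object
    # Extend the subscription period, mail a receipt, etc.
  end

  events.subscribe 'customer.subscription.deleted' do |event|
    # Downgrade the account.
  end
end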

When to use Stripe

If you need to bring your product to market quickly, I'd definitely advise you to use Stripe, since its ecosystem and momentum of innovation put it ahead of the competition.

When to use Paymill

If you're in a country where Stripe is simply not available, like in Northern Europe or Turkey, or if you want to process non-credit-card payments like the German ELV or EU SEPA payments, Paymill is an option.

Tips (apply to both)

While developing, you'll want to test your stuff. Testing Stripe events like recurring payments can be really painful, since you would have to wait until a payment actually recurs, and testing webhooks in a live environment is a tedious task. Luckily there is a handy service called Ultrahook that helps you proxy webhook callbacks to your local development machine.
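Ultrahook ships as a Ruby gem; usage is roughly the following (the namespace and port are examples), after which webhooks sent to the public ultrahook.com URL are forwarded to your local server:

$ gem install ultrahook
$ ultrahook stripe 3000

If you have further questions, ping us on Twitter or write us an email.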



In one of our recent projects, we needed to implement the ability to sign automatically generated configuration profiles for iOS and OS X [1] in the backend. If a configuration profile is signed and iOS / OS X successfully verifies its signature, the profile looks like the following:

Signed Configuration Profile

The user immediately sees that the profile is signed and can thus be trusted, because it was signed by a trusted authority (in this example COMODO). The requirements of the project dictated that the generated profile must be signed with a valid signature. However, when we initially tried to sign it with various approaches, the signature verification always failed. After a little research, we found that the intermediate certificates necessary for a successful verification were not present in the configuration profile. Dick Visser's article "Sign Apple mobileconfig files" [2] pointed us in the right direction: signing the configuration profile on the command line integrates the intermediate certificates and allows the profile to be verified successfully.

$ openssl smime -sign -signer certificate.pem -inkey private_key.pem -certfile intermediate_certificates.pem -nodetach -outform der -in profile.mobileconfig -out profile_signed.mobileconfig

Our first solution looked like this:

def sign_mobileconfig(mobileconfig)
  return `echo "#{mobileconfig}" | openssl smime -sign -signer "#{Rails.root}/config/keys/certificate.pem" -inkey "#{Rails.root}/config/keys/private_key.pem" -nodetach -outform der -certfile "#{Rails.root}/config/keys/intermediate_certificates.pem" -binary`
end

This spawns a separate process to invoke openssl. But we wanted to sign the profiles with the OpenSSL API directly in Ruby, not by invoking an external program. Unfortunately, the documentation of OpenSSL itself and of the OpenSSL API in Ruby is very poor. Brian Campbell's answer to the question "Digital signature verification with OpenSSL" on Stack Overflow [3] was the best explanation we could find of how to sign a file with Ruby's OpenSSL API. However, the answer did not lead us to a successful signature, because either the intermediate certificates were not present in the signed configuration profile or the signature creation failed (depending on how we configured the parameters of OpenSSL::PKCS7::sign).

The road to success is to bundle the (PEM-encoded) intermediate certificate and the root certificate in separate files, each certificate in its own file. Before invoking OpenSSL::PKCS7::sign, read all certificates and create OpenSSL::X509::Certificate instances. Create an Array with the certificate instances and provide it to OpenSSL::PKCS7::sign as the fourth parameter. The fifth parameter should be OpenSSL::PKCS7::BINARY. The following listing outlines the solution that fulfills our project’s requirements in a nice and clean manner.

signing_cert_data = … # Read from file.
signing_cert = OpenSSL::X509::Certificate.new(signing_cert_data)

private_key_data = … # Read from file.
private_key = OpenSSL::PKey::RSA.new(private_key_data)

configuration_profile_data = … # Read from file.

intermediate_cert1_data = … # Read from file.
intermediate_cert1 = OpenSSL::X509::Certificate.new(intermediate_cert1_data)
intermediate_cert2_data = … # Read from file.
intermediate_cert2 = OpenSSL::X509::Certificate.new(intermediate_cert2_data)
intermediate_certs = [ intermediate_cert1, intermediate_cert2 ]

signed_file = OpenSSL::PKCS7.sign(signing_cert, private_key, configuration_profile_data, intermediate_certs, OpenSSL::PKCS7::BINARY)
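The return value is an OpenSSL::PKCS7 instance; to obtain the actual bytes of the signed profile, serialize it as DER, for example:

File.open('profile_signed.mobileconfig', 'wb') do |file|
  file.write(signed_file.to_der)
end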

The full source code is available on GitHub [4]. We welcome any comments or suggestions to further improve it.


Webbzeug – procedural texture editor for your browser


by Carsten Przyluczky on October 18th, 2013

One night, shortly after we moved to our new office, Sascha and I were watching some demos on YouTube. You know those 64K executables you run and wonder: how the heck did they put all that content into 64K?! One major idea is procedural content. That means the content is calculated – we don't store the what, but the how.

Let me give a simple example. Say you want to store a texture that holds a glow pattern: a circle that fades out towards the border. Those can be used for particles, fake light glows and whatnot. Instead of storing it as a JPEG, we just remember "glow with radius and falloff" and render the glow at runtime, before the demo starts. Clearly, those two parameters take only a tiny fraction of the space of the actual texture. Since such textures and their generation can get very complex, some people build editors for them. One of those is .Werkkzeug from Farbrausch. I showed it to Sascha, and he was amazed by the simplicity and the power that thing holds. I told him that I had always wanted to do something like that as a web app. He said "let's roll", and so we did.

Webbzeug is our (still alpha) approach to an easy and fast procedural texture editor for the web. Sascha coded the beautiful frontend while I hacked the operations. Our first approach was to use canvas, for simplicity and compatibility. All was good until we added blur and lighting operations – they were too slow. We tried some optimizations: still too slow. Our second approach for the backend was WebGL, with the help of three.js, a nice WebGL wrapper. And that is really fast – realtime, actually.
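To make the glow example concrete, here is a small sketch (our illustration, not actual Webbzeug code) that renders such a pattern into a canvas from just the two parameters, radius and falloff:

// Render a radial glow procedurally: two numbers instead of a stored bitmap.
function renderGlow(ctx, size, radius, falloff) {
  var image = ctx.createImageData(size, size);
  for (var y = 0; y < size; y++) {
    for (var x = 0; x < size; x++) {
      var dx = x - size / 2, dy = y - size / 2;
      var d = Math.sqrt(dx * dx + dy * dy) / radius;         // normalized distance
      var intensity = Math.pow(Math.max(0, 1 - d), falloff); // fade towards the border
      var i = (y * size + x) * 4;
      image.data[i] = image.data[i + 1] = image.data[i + 2] = 255;
      image.data[i + 3] = Math.round(intensity * 255);       // alpha carries the glow
    }
  }
  ctx.putImageData(image, 0, 0);
}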

Lets get the party started

Enough talking, I hear you say – okay, let's roll. I'll give you a quick guide, and then we'll open a sample and see what this tool can do!

The basic concept you need to understand is called "operation stacking". We have basic operations that we can combine to create nice-looking textures. So we have to select operations, set their parameters, and define a running order. That's why some people use "graphical programming" as a buzzword for this kind of GUI.

So open Webbzeug.com in your browser, right-click on "generative" and select "rect". Now you have your first action glued to the mouse cursor. Drop it somewhere on the grid and hit space. You should see something like this:

webb1

The little eye icon marks the observed action. Now hold shift and click the action: the parameter dialog pops up. As you can see, for the rect action you can change the x and y position, the size, and the color. To change a value, you can either type it in, or click into the text field, hold the left mouse button, and move the mouse up and down. OK, set the x and y position to zero. Now add another rect, drop it below the first one, and hit space. You should see two rectangles. Note that the upper rectangle is now the input for the lower one. Now comes the cool part: shift-click the upper action (don't hit space) and play around with the parameters. The rectangle defined by the upper action changes. That means you can change parameters at any point in your script and still see the result at a certain point, namely the observed action. Without that feature, creating procedural textures would be very time-consuming.

To make things easier, we divided the actions into three categories. Generative actions generate basic shapes like rectangles and circles, as well as noise and other patterns. Processive actions apply modifications such as deformations, color changes, or combinations of other actions. And last but not least there is Memory, which contains load and store. These actions allow you to store the result of a part of your script and reuse it elsewhere via load.

To get a better idea of what Webbzeug is able to do, click on samples and select the HUD element. After it opens, the lower cont/bri action should be selected, so you just need to hit space, and the whole thing should look like this:

webb2

Feel free to step through the individual actions, hit space, and see what they do and what the workflow can look like.

Conclusion

The current state is alpha. The thing still has some bugs, and the performance isn't close to what it will be in the final version. But it fulfills its purpose as a proof of concept.

So we hope you have fun with Webbzeug. If you do, tweet about it, spread the word, and feel free to contribute on GitHub.

Enough reading – visit Webbzeug.com!


In this article, I will first take a high-level look at modern frontend architectures: In a time where web apps easily surpass 1 MB of JavaScript, what should we try to achieve? Second, based on these considerations, I’m going to argue that Backbone.js should fully support the traditional HTTP URL scheme.

The ideal web site architecture

Today’s typical web site architectures can be placed between two extremes, one being traditional server-side logic, the other being JavaScript-only single-page apps. In between, there are hybrid approaches. Pamela Fox does a great job of describing these architectures and their pros and cons. She also introduces some key requirements from the user’s perspective: Usability, Linkability and Searchability/Shareability. In her presentation, she gives a quick overview of how the architectures perform. This outlines the current situation quite well.

How should a modern site work? There are several reasons why one should combine the best of all approaches: Server-side robustness with a client-side turbo-boost. In practice, we run into problems when trying to share logic between server and client. I think this is an engineering problem that can and will be solved in the future.

So what is the key to a future-proof architecture? I think it is Progressive Enhancement from soup to nuts. Progressive Enhancement is still useful and necessary. A typical site should be able to fulfill its basic purpose somehow even without JavaScript. A machine that speaks HTTP and HTML should be able to read a site. Of course, modern web sites aren't about static content, but about user interactivity. But in most cases, there are resources with a static representation, be it text, video or audio.

In order to achieve Searchability and also performance, content needs to be rendered on the server to some extent. Twitter learned this lesson the hard way in the "#NewTwitter" days, when they experimented with a completely client-side architecture, but ultimately went back to serving traditional HTML pages for each initial request. Still, twitter.com is a huge JavaScript app. JavaScript operates on top of the initial DOM and then takes over to speed up subsequent actions. Hopefully, we'll see this hybrid approach more and more in the future.

Rendering HTML on the server-side is considered valuable again. That’s because the traditional stack of HTTP, URL and HTML is simple, robust and proven. It can be incredibly fast. It works in every user agent; browsers, robots and proxies are treated uniformly. Users can bookmark, share and save the content easily.

Cool URLs are cool!

Used correctly, URLs are a great thing. Web development is centered around them: Cool URLs don’t change, URLs as UI, RESTful HTTP interfaces, hackability and so on. The concept of HTTP URLs dates back to 1994.

When “Ajax” appeared in 2005, people quickly realized that it’s necessary to reflect the application state in the URL. For a long time, JavaScript apps weren’t able to manipulate the full URL silently without triggering a server request. To achieve Linkability, therefore, many JavaScript apps are using “Hash URLs”. It’s safe to set the fragment part of the URL, so this became common practice. Most JavaScript libraries for single-page apps still rely on Hash URLs. Among others, Backbone.js uses Hash URLs per default in its routing implementation.

Today we know that Hash URLs aren’t the best solution. In 2011 there was a big discussion after Twitter and Google introduced Hash URLs and “Hash Bang URLs” in particular. Most people agreed that this was a bad hack. Fortunately, HTML5 History (history.pushState and the popstate event) makes it possible to manipulate the URL without leaving the single-page app. In general, Hash URLs should only be used as a fallback for older browsers.
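In code, the modern approach boils down to the following sketch (render stands in for whatever view logic the app uses):

// Silently change the full URL without a server round-trip …
if (window.history && history.pushState) {
  history.pushState({q: 'pushState'}, '', '/search?q=pushState');

  // … and re-render when the user navigates back or forward.
  window.addEventListener('popstate', function (event) {
    render(location.pathname + location.search, event.state);
  });
} else {
  // Fallback for older browsers: hash URLs.
  location.hash = '#search?q=pushState';
}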

If you use pushState, all URLs used on the client need to be recognized by the server as well. If a client sends a request such as GET /some/path HTTP/1.1, the server needs to respond with a page that at least starts the JavaScript app. In the end, making the server aware of the request path is a good thing: instead of just responding with the code for the JavaScript app as a catch-all, the server can respond with useful content. In this case, traditional URLs enable Searchability and Shareability. Take, for example, a URL like this:

http://www.google.com/search?hl=en&ie=utf-8&q=pushState

These kinds of URLs are a well-established standard, widely supported, and can be handled on both the server and the client. So my conclusion is: Future JavaScript-heavy web sites may be “single page apps” because there’s only one initial HTML document per visit, but they still use traditional URLs.

Backbone.js and query strings

Backbone.js has a great History module that observes URL changes and allows setting the URL programmatically. However, it doesn't support traditional URLs completely: the query part (?hl=en&ie=utf-8&q=pushState), also known as the query string, is ignored when routing. In this second part of the article, I'd like to discuss the ramifications of this missing feature.

Backbone treats /search?q=heaven and /search?q=hell as the same URL. This renders the query string useless. You can "push" URLs with different query strings, but if the user hits the back button, Backbone won't consider this a URL change, since it ignores the change in the query string.

Chaplin.js, an opinionated framework on top of Backbone.js, tries to work around this by parsing the query string into a Rails-like params hash. But it ultimately fails to support query strings because of Backbone.History’s limitation. Full disclosure: I’m a co-author of Chaplin.

The lack of query string support in Backbone is deliberate. The maintainer Jeremy Ashkenas decided against it. In several GitHub issues, he provides rationale:

From issue 891:

In the end, I think most Backbone apps should definitely not have query params in their app URLs — they’re a server-side URL convention that doesn’t have much useful place in client-side routing. So we shouldn’t be supporting them by default — but if you want this behavior, it should be easy enough for you to implement

From issue 2126:

Backbone shouldn’t be messing with the search params, as they don’t have a valid semantic meaning from the point of view of a Backbone app. If you want to use them (on a page that has a running backbone app), that’s totally fine …

In the most recent issue, Jeremy points out that this is not a browser compatibility issue:

From issue 2440:

wookiehangover: The thing that’s problematic about this (and why querystrings are ignored as of 0.9.9) is due to a handful of very weird but very real bugs with querystring processing and character encoding between browsers.

Nope. Not in the slightest ;)

The reason why querystrings are ignored by Backbone is because:

Querystrings only have a defined meaning on the server-side. The browser does not normally parse or otherwise handle them.

While querystrings are fine in the context of real URLs, querystrings are entirely invalid in the context of #fragment URLs. Most Backbone apps deal with fragment urls sooner or later — even if you’re using pushState for most of your users, IE folks will still have fragments. So querystrings can’t be used in a compatible way.

Better to leave them out of your Backbone app, and use nice URLs instead. If you must have them for the server side of the equation, that’s fine — Backbone will just ignore them and continue about its business.

Party like it's 1994!

I’d like to answer to these statements here. First of all, it’s great to hear that there are no major browser issues blocking full URL support. Jeremy argues against query strings on another level:

Querystrings only have a defined meaning on the server-side. The browser does not normally parse or otherwise handle them.

Honestly, I don’t understand this point. You can process a query string on the server, but you can do that on the client as well. There are cases where query strings are processed almost exclusively on the client, for example the infamous utm_ parameters for Google Analytics.

A URL is a URL. Wherever a URL appears, its parts have a defined meaning – there are Internet Standards which define the meaning. It doesn’t matter which software generates the URL and which processes it, a query string should have the same meaning.

While querystrings are fine in the context of real URLs, querystrings are entirely invalid in the context of #fragment URLs.

This assumes that Backbone apps use Hash URLs instead of pushState. Well, most of them do and that’s indeed a source of pain. But technically the query string ?foo=bar is entirely valid inside the fragment part of a URL.

A URL like http://dahl.example.org/#search?q=matilda may look weird, but it is completely in line with RFC 3986. With pushState, you don’t have to think about URLs in URLs. You can use URLs like http://dahl.example.org/search?q=matilda. This is the form of URLs that has been around since 1994, for good reasons.

… even if you’re using pushState for most of your users, IE folks will still have fragments. So querystrings can’t be used in a compatible way.

Well, they can be used in a compatible way. It’s technically possible to put path and query string into a fragment. It might violate the semantics of traditional URLs, but syntactically, it’s still a valid URL.

Better to leave them out of your Backbone app, and use nice URLs instead.

Jeremy argues that client-side apps should encode query params inside the path, like

http://dahl.example.org/#books/order=asc/sort=published/

That’s what he calls a “nice URL”. I beg to differ. In the spirit of 1994, why not stick to traditional URLs like:

http://dahl.example.org/books?order=asc&sort=published

I see no reason why JavaScript apps should invent new URL syntaxes. Today’s JavaScript apps are using pushState and properly accessible URLs. They should not and don’t have to differ from the URL conventions that have been used since the beginning of the web.

It's an RFC-compliant URL, and there are plenty of server and client implementations for parsing the query params into a useful hash structure. In contrast, if you use URLs like

http://dahl.example.org/#books/order=asc/sort=published/

… you cannot use these implementations, but have to write your own “nice URL” parser instead.
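For traditional URLs, by contrast, such a parser fits in a few lines – here is a naive sketch (real implementations handle more edge cases):

function parseQuery(search) {
  var params = {};
  search.replace(/^\?/, '').split('&').forEach(function (pair) {
    if (!pair) return;
    var parts = pair.split('=');
    params[decodeURIComponent(parts[0])] =
      decodeURIComponent((parts[1] || '').replace(/\+/g, ' '));
  });
  return params;
}

parseQuery('?order=asc&sort=published');
// → { order: 'asc', sort: 'published' }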

If you must have them for the server side of the equation, that’s fine — Backbone will just ignore them and continue about its business.

If you're building an app that has accessible documents, traditional URLs and query strings, you most likely need to process the query string on both the server and the client side. For such apps, it is not an option that the server understands query strings while Backbone ignores them.

My fellow Chaplin author Johannes Emerich pointed out another reason why Backbone should not limit the use of URLs:

In the end the point is that Backbone is said to be an unopinionated framework. But pushing for query params to be encoded as paths is anything but unopinionated or flexible.

There are many reasons why you would want to see those params on the server: Include some JSON data to be processed immediately on client-side app startup; render a full initial static document that contains all the data and only let the client-side app take over from there (for speed/SEO), etc.

In effect, this way of handling params in URLs is saying that Backbone is really only meant for completely client-side apps, and that you have to jump through extra hoops if you are going for a hybrid approach.

Of course, Chaplin and other code could monkey-patch Backbone in order to introduce query string support. But since Backbone claims to be "unopinionated", it should simply support traditional URLs instead of making query strings impossible to use. The ultimate decision for or against query strings should be the user's, not the library's.

In short, Backbone should support query strings because future-proof JavaScript apps are based on traditional URLs.

Thanks to Johannes Emerich (knuton) for feedback and input.


How we built the data visualization tool GED VIZ


by Mathias Schäfer on July 10th, 2013

Last week we released GED VIZ, a tool to create data visualizations for the web. It’s free to use and also open source! See the announcement for general information.

GED VIZ is a large JavaScript (“HTML5”) application that runs in modern web browsers. It’s made using open web technologies: HTML, CSS, JavaScript and SVG. In this follow-up post we’d like to elaborate on the technical implementation.



GED VIZ: An HTML5 data visualization tool


by Mathias Schäfer on July 9th, 2013


Good visualisations are more than just fancy graphics. They are largely about storytelling: shedding light on important issues, and at the same time inspiring us to raise new questions.

Building such visualisations can be a very time-consuming effort, mostly requiring hand-crafted creative input. Consequently, we’re looking for ways to generate such visualisations without being experts in visual design, which in turn could make data even more accessible.

When the Bertelsmann Foundation, a well-known German non-profit organization and think tank, investigated the relations between the European states during the time of crisis, they found an inspirational visualisation in the New York Times called "Europe's Web of Debt".

However, this was just a static graphic with no way to interact, add data, or watch the data change over time. The Bertelsmann Foundation saw a lot of potential in building a tool to create interactive visualisations of economic and demographic relations between states and unions. The GED project "intends to contribute to a better understanding of the growing complexity of economic developments".

After a long ideation and design process with Boris Müller and Raureif, they approached us to build this tool with whatever was feasible with state-of-the-art technology. Some time later, we're very proud to introduce the GED VIZ tool, which was finally released on July 2nd.

The GED VIZ editor

GED VIZ is a complex HTML5 application that runs right in the web browser. Using the editor interface, you can create slideshows of interactive charts that visualize economic indicators and relations of countries and their change over time. The slideshows can be embedded into other websites as well. For example, a news site or a blog can embed the visualization into their articles and comments. Users can also share and export the visualization or download the raw data.

On the GED website, there are several articles enriched with interactive visualizations. The following presentation illustrates the story “Shutting Out the BRICs? Why the EU Focuses on a Transatlantic Free Trade Area” by Justine Doody.

To get started with the tool, you can watch the tutorial video on YouTube.

Under the hood, GED VIZ is made with open web technologies. It is a large-scale client-side JavaScript application using our Chaplin.js architecture. On the server side, there is a Ruby on Rails application crunching the data which is stored in a MySQL database. We’ve written another detailed post on the technical implementation.

GED VIZ is a free online tool you can use without prior registration. It is also an open source project. The full code was released under the MIT license and is available on GitHub. We invite everyone to study the code and advance the tool, for example by adding new data sources and new abilities to tell stories.

GED VIZ is our latest take on data visualization using state-of-the-art web technologies. We hope that GED VIZ will be used to create impressive and insightful presentations. Many thanks to the GED team at Bertelsmann for letting us create such an application and release it as an open source project. Also thanks to the designers, testers and prototype developers involved!

Try out GED VIZ at viz.ged-project.com