
Life After WIMP

July 23rd, 2015 by Sam

Since the Xerox PARC and Apple Lisa user interfaces hit the scene, the WIMP (windows-icons-menus-pointers) interaction model has become a universal standard. This past Hackweek, fellow designer and Chartcorps member Sam Chieng, front-end developer Aliya Robinson, and I decided to dive into UI’s history and potential future.

Here’s where our week’s investigation took us:


I was particularly intrigued by how we arrived at the particular interfaces we’re so accustomed to today. How did we decide on specific UI elements as “the best”? How does this innately connect to how humans learn and perceive information? Are windows and pointers the best representation of how we interact with a system?

As I began researching, I came across a few papers published in the early ‘90s by Jakob Nielsen on Human-Computer Interaction. Nielsen analyzed the dramatic change in interaction when computers transitioned from the command line to a modern GUI. The latter allowed for object-oriented structuring, or direct manipulation, which was based on childlike primitives of hand-eye coordination—for example: dragging a file icon into your trash icon, as opposed to typing ‘rm foo’ in a command line. In the end, this switch made new user adaptation significantly easier.

Direct Manipulation is a concept with which we’ve become so familiar today (particularly in WIMP models), but Nielsen explains that there are many interaction levels involved in a single task.

The layers of interaction involved in a single task, after Nielsen.

The graph above illustrates the multiple layers that can be applied to something as simple as deleting a few sentences in a text editor. It raises the question: can we make these interactions even more “direct”?

This is where newer interactions come in. Touch screens and wearables have made interacting with technology more intuitive and user-centric. The focus is now on controlling the task, not the computer, to get to your goal.

Another paper, published in 2000 by Michel Beaudouin-Lafon, a professor of computer science at University of Paris-Sud, introduced methods for comparing the effectiveness of different interaction techniques with direct manipulation. Evaluating how classic WIMP elements (menus and scrollbars) compare with newer interaction techniques (graspable interfaces), Beaudouin-Lafon argued that the latter had better indirection, integration, and compatibility: three properties he used to evaluate efficiency.

Most remarkable of all was how accurately Nielsen and Beaudouin-Lafon had predicted future interactions and interfaces we now use today; all their papers were published before the iPhone was introduced in 2007! I’m excited to see the new possibilities and developments we’re working on today in the discipline of Human-Computer Interaction, and how this extends to better post-WIMP design.


My goal for this project was to learn about the Leap Motion controller and how it works. If you don’t know, the Leap Motion controller is a device that responds to hand motion, letting you manipulate objects in your browser, visualizer, or device. Think of it as a Wii without a remote… and in 3D! As I dug deeper into Leap Motion code, I found many plausible avenues for utilizing the technology. Ultimately, the most accessible route, in my opinion, is to build a 3D object out of DOM elements with CSS3 transform properties, then animate those transforms in response to the controller.

For example, you can create a cube with a handful of CSS3 transform rules.
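A minimal sketch of that CSS (class names and sizes here are illustrative): six absolutely positioned faces are pushed out from a shared center with 3D transforms, inside a container that preserves 3D space.

```css
/* Illustrative cube: a perspective container, a cube wrapper that
   preserves 3D positioning, and six faces rotated/translated into place. */
.scene { perspective: 800px; }

.cube {
  position: relative;
  width: 200px;
  height: 200px;
  transform-style: preserve-3d;
}

.cube .face {
  position: absolute;
  width: 200px;
  height: 200px;
}

/* Each face is rotated to its side, then pushed out by half the width. */
.cube .front  { transform: translateZ(100px); }
.cube .back   { transform: rotateY(180deg) translateZ(100px); }
.cube .right  { transform: rotateY(90deg)  translateZ(100px); }
.cube .left   { transform: rotateY(-90deg) translateZ(100px); }
.cube .top    { transform: rotateX(90deg)  translateZ(100px); }
.cube .bottom { transform: rotateX(-90deg) translateZ(100px); }
```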

The markup itself is just a container with one element per face.
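A sketch of the corresponding HTML (the class names are illustrative and assume matching CSS transforms on each face):

```html
<!-- Illustrative markup for a six-faced CSS cube. -->
<div class="scene">
  <div class="cube" id="cube">
    <div class="face front">1</div>
    <div class="face back">2</div>
    <div class="face right">3</div>
    <div class="face left">4</div>
    <div class="face top">5</div>
    <div class="face bottom">6</div>
  </div>
</div>
```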

Which gets you a 3D cube rendered right in the browser.

Then, to manipulate the object with Leap Motion, you wire up a frame loop that reads the hand’s position and rotation on every tracking frame.
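A minimal sketch of that wiring, assuming the leapjs client library is loaded in the page; `handState` is a hypothetical helper introduced here so the frame-reading logic stands on its own:

```javascript
// Hypothetical helper: pull the first hand's position and orientation
// out of a Leap tracking frame. Returns null when no hand is visible.
function handState(frame) {
  if (frame.hands.length === 0) return null;
  var hand = frame.hands[0];
  return {
    position: hand.palmPosition,                       // [x, y, z] in mm
    rotation: [hand.pitch(), hand.yaw(), hand.roll()]  // radians
  };
}

// In the browser, Leap.loop fires the callback once per tracking frame:
// var circleElem = document.getElementById('cube');
// Leap.loop(function (frame) {
//   var state = handState(frame);
//   if (state) setTransform(circleElem, state.position, state.rotation);
// });
```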

In this configuration, circleElem is the cube, and the setTransform method takes the hand position and hand rotation from Leap Motion. Inside that function, the programmer can dynamically change the transform style of the cube.

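A minimal sketch of what such a setTransform could look like; the millimeter-to-pixel mapping and axis signs here are assumptions (Leap’s y axis points up while the screen’s points down, hence the sign flip):

```javascript
// Hypothetical implementation: map the palm position onto a CSS
// translation and the palm angles onto rotations about each axis.
// position is [x, y, z] in mm; rotation is [pitch, yaw, roll] in radians.
function setTransform(element, position, rotation) {
  element.style.transform =
    'translate3d(' + position[0] + 'px, ' + (-position[1]) + 'px, ' + position[2] + 'px) ' +
    'rotateX(' + rotation[0] + 'rad) ' +
    'rotateY(' + rotation[1] + 'rad) ' +
    'rotateZ(' + rotation[2] + 'rad)';
}
```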

All in all, it’s fairly simple to get started with plain DOM elements, as opposed to finding or creating 3D models in any of the popular 3D modeling tools currently available. It’s promising: with CSS3 transforms, CSS Shapes, and modern browsers’ SVG support, we’re embarking on a nearly real 3D interactive experience for the web.


My personal interest was in connecting skeuomorphic digital UI with its real-life counterparts (for example, a button online that looks three-dimensional is visually mimicking a physical button). I decided to focus on hover states, partly because, unlike scrolling or clicking, they stand out as an interaction that would seem like magic in real life. To bring hover states into the real world, I constructed a panel covered with a grid of squares cut from thick board.

Under roughly half of the squares were magnets, half of them with north polarity facing up and the other half with south polarity facing up. To interact with the board, users put a magnetic controller (what you might call a digital-era finger puppet) on a finger and hovered their hand across the board.


This demonstration let users experience both the standard visual feedback of interactive elements ‘hovering’ to show their interactivity, and an entirely new sensation when passing their hand over repellent magnets (the feeling was similar to running one’s hand over a bump). The experience was pleasant and novel, and my hope is that future hardware will start to incorporate more tactile feedback.

Although the three of us each examined UI through a different lens, we all reached a similar conclusion: the future of UI is rife with possibility. That means understanding the past, but always challenging its conventions. That means embracing new technology, but critically evaluating its use and construction. And that means building bold, new stuff.

This post is just a short overview of all the crazy and wonderful things we learned this past Hackweek. To see more of Hackweek in action, be sure to check out

P.S. Want to join our team? We’re hiring: Chartcorps, Product Design, and Marketing Design. Check out the openings here.

This is the second post in a series on how we used the design process to prototype, build, and test a brand new support site.

By collaborating with our designers, the Chartcorps (our client support team) was able to create a brand new support site experience from scratch. Last week, my teammate Chris walked us through the first three steps of our design process that began with understanding, defining, and ideating a solution to a problem. This week, I’ll be discussing how we took those ideas from the whiteboard to the web in the next three phases: prototyping, building, and testing.

Phase 4: Prototype

In the Ideate phase, we were able to lay out the informational groundwork, but we still needed to figure out details around navigational flow and visual consistency.

An early mockup of navigation and flow.

Chris and I used Sketch and InVision to generate quick wireframes to get a better sense of how users could interact with the site. We made a few versions to compare and get feedback from the rest of the team and our designers — an incredibly useful step for planning how to scale our site for future resources.

After finalizing the overall structure, we moved on to higher fidelity mockups to find a consistent style and voice for our product. We wanted the support site to feel like an extension of the tools that customers were used to using, but also serve as an easy channel to connect with us.

Thankfully, our designers had already put together a style guide that helped us quickly get from wireframes to mockups. We replaced our Lorem Ipsum with real text, drew our own illustrations, and decided on a unique, but integrated, color theme.

Palette from our Style Guide

We found that prototyping was a quick and efficient method to diverge and converge on many ideas with constant revisions along the way. And since Chris and I were new to coding, prototyping helped us nail down design decisions before tackling code.

Phase 5: Build

When new members of Chartcorps come on board, we encourage everyone to get comfortable with code using resources such as Harvard’s online CS50 course, Codecademy, and Learn Python the Hard Way (which Chartbeat Developers actually TA once a week for the whole company!). Chris and I had also learned a lot on the job, but we really wanted to take all this knowledge and build something ourselves.

We were familiar with writing HTML, CSS, and JavaScript, but putting everything together in a cohesive, scalable system was a brand new experience for us. Enter the static-site generator: we chose Jekyll.

There are a lot of different options out there, but after experimenting with both traditional CMSes and static-site generators, we found that the latter offered better version control and flexibility for what we were trying to build. Jekyll was our favorite because it was beginner-friendly and a few of our designers had experience using it.

While building the site, we found a lot of parallels with the design process. We spent a lot of time understanding, ideating, and prototyping different file structures to reflect the design of the site from the user’s view. We ended up using a cascade of templates and section-specific files to have a modular site that new members of Chartcorps could easily add to or edit.
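As a hypothetical illustration of that cascade (the actual file and class names on our site differ), a Jekyll section layout can itself inherit from a site-wide default, so each page only declares which layer wraps it:

```html
---
layout: default
---
<!-- _layouts/section.html: an illustrative middle layer in the cascade.
     The front matter above makes this layout inherit from default.html. -->
<section class="support-section">
  <h1>{{ page.title }}</h1>
  {{ content }}
</section>
```

A new support article then needs only its own front matter (`layout: section` and a title) plus a Markdown body, which is what lets new Chartcorps members add or edit pages without touching the templates.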

Phase 6: Iterate & Learn

After many weeks of coding we were finally at the point of launch! We decided to start off with a soft roll out to test and get more feedback.

We held various focus groups with members of our Product Outreach Team to see if they could find all the resources in their new locations. We also attended the design review meetings that our Design and Marketing teams hold bi-weekly for additional feedback.

Of course the last step of any design process is continuous iteration. Since this is our first run at building a whole support site from conception to execution, we’d love to hear your thoughts: What are we missing? Does the structure make sense? Did you find what you were looking for?

Just as Chris and I used this project as a chance to practice and learn, it’s had a similar effect on everyone on Chartcorps. One of our teammates, who’s more interested in data and integrations than front-end design, used API endpoints from the Chartbeat Status Page to power a real-time widget that instantly tips people off to any degraded performance!
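As a rough sketch of how such a widget can work (the endpoint URL and payload shape below are assumptions, not the real Status Page API), it only needs to poll a JSON status feed and filter for unhealthy components:

```javascript
// Hypothetical helper: given a status payload shaped like
// { components: [{ name, status }, ...] }, return the names of
// components that are not fully operational.
function degradedComponents(status) {
  return status.components
    .filter(function (c) { return c.status !== 'operational'; })
    .map(function (c) { return c.name; });
}

// In the browser, a widget could poll once a minute:
// setInterval(function () {
//   fetch('https://status.example.com/api/status.json')   // assumed URL
//     .then(function (res) { return res.json(); })
//     .then(function (status) { renderWidget(degradedComponents(status)); });
// }, 60000);
```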

Check it out, we’re live!

The Support Site project that Chris and I set out to build is becoming much more than just a side project for us. It’s turning into a way for all members of Chartcorps to explore and develop their technical skills and interests.

For Chris and me, it’s been about learning what the web design process looks like at software companies, and how something goes from sketch, to design, to launch. And since we had such a great time collaborating with and learning from the Chartbeat design team, we’re going to continue publishing blog posts about our design culture.
