
Link: The Power of Wee Things

Wed, 04/22/2015 - 14:17



Lena Groeger (of ProPublica) has written a beautiful piece about the Power of Wee Things. She talks about using small things, multiples, and units to display data and get people interested. The article goes through many, many examples covering many different areas and ideas. She also gave a great talk on the topic at OpenVis 2014.

On a somewhat related note, Jake Harris wrote about the importance of individual items in data journalism and visualization, and how to connect with them. The two pieces work very well together to illustrate a way of visualizing data that is often overlooked.

Paper: ISOTYPE Visualization – Working Memory, Performance, and Engagement with Pictographs

Mon, 04/20/2015 - 06:15



Unit charts are not common in visualization, and they are often considered a bad idea. The same is true for using shapes other than rectangles. Neither is based on much actual research, however. In a new paper, we look at the specific example of ISOTYPE-style charts – and find them to be quite effective.

I have written about ISOTYPE before: Otto and Marie Neurath developed the idea in the 1920s, Gerd Arntz created the iconic shapes. Neurath’s idea was to communicate facts about the world in terms of numbers that would be easy to understand. The charts they produced showed data like the kinds of technology used by people (radio receivers, cars, telephones), changes in the way people worked through the course of the industrial revolution, etc.

For a paper presented this week at CHI 2015, Steve Haroz, Steven Franconeri (both at Northwestern University), and I conducted a number of studies to gauge how well people could read these charts, how well they would remember what they had seen, and how engaging they found them. The different experiments had varying numbers of participants, mostly in the range of 30-50.

Here is the kind of image we used in the study to represent the ISOTYPE style. We focus on just the idea of repeating small icons. Icon shapes were drawn from a large number of different types of things, animals, etc.

We compared this to four others: basic bar charts, stacked circles (to see if stacking alone would be better), scaled objects, and a superfluous image in the background.

In broad strokes, it turns out that repeated objects are easier to read and compare, as long as the number is low. But since ISOTYPE icons always represent some multiple anyway, that is not a significant limitation.

Memory is also improved when using icons instead of generic shapes. This is not surprising, though it is worth pointing out that this did not come with a decrease in reading speed of the ISOTYPE charts when compared to bar charts. Also, using icons as labels (instead of words) for bars did not work nearly as well.

Lest you think that this study can be used to justify chart junk, we also found that the superfluous object in the background (bottom right in the image above) was highly distracting and interfered with memory and reading performance.

These are just some of the results; there are many more in the paper. Steve Haroz has put together a nice little landing page for the project with key take-aways and design tips, as well as an interactive playground to create ISOTYPE images. You can even try the studies yourself!

Steve Haroz, Robert Kosara, Steven P. Franconeri, ISOTYPE Visualization – Working Memory, Performance, and Engagement with Pictographs; Proceedings CHI, pp. 1191–1200, 2015

Link: Design and Redesign in Data Visualization

Wed, 04/15/2015 - 14:17



Fernanda Viégas and Martin Wattenberg have written a wonderful piece titled Design and Redesign in Data Visualization about criticism in data visualization. They thoughtfully analyze the practice and point out some of the issues when people create redesigns, including intellectual honesty and perfect hindsight.

They then go on to define some “rules of engagement” for a more reasonable approach to redesign. They argue for a kinder, more respectful, and more balanced process. Their ideas are informed by the critique in design and certainly make a lot of sense for visualization.

The piece was originally written for the Malofiej 22 book, and I’m glad that it has been published for a wider audience.

Link: Dear Data

Wed, 04/01/2015 - 14:17



Giorgia Lupi and Stefanie Posavec are collaborating on a clever and beautiful new project they call Dear Data (Twitter account). Every week, they send each other postcards with hand-drawn visualizations of data they have gathered: public transportation, ways they communicate, etc.

Giorgia and Stefanie are two of the most interesting people working in data visualization/design/art right now. Both are also incredibly skilled and creative designers, well worth watching.

Link: Data Journalism in the 19th Century

Wed, 03/25/2015 - 14:17



Scott Klein of ProPublica has written a great story about an early use of data in journalism, and Horace Greeley, the colorful journalist behind it. Greeley found an issue and then gathered the data to show the extent of the problem. This is not unlike today.

In Greeley’s case, the issue was how much money members of Congress were paid for their travels to their home states, despite modern conveniences like railroads that made those journeys much faster than they had been in the past.

The story is very well written and represents an important piece of history and context for today’s practice of data journalism.

Complications

Mon, 03/23/2015 - 02:55



In watches, a complication is anything that goes beyond the basic function of showing the current time: alarm time, moon phase, etc. I think the term should be adopted in user interface design and visualization.

With their upcoming Watch, Apple is clearly playing to horology and the long history behind the design of classic watches. They use many watch terms, even where they don’t really make sense – like movement for the Watch’s CPU (which has no moving parts and does far more than a typical movement, which just moves the hands).

One of the terms that they use is complications. They nicely illustrate the idea in an image on their website.

Among the Watch’s complications are tiny widgets that show the current temperature, a stock price, the wearer’s activity level, etc. They can be turned on and selected by the user.

What does this have to do with visualization or user interfaces? I think there is a deep cleverness in calling these things complications beyond just the cutesy superficial one.

Anything that does not serve the basic function of a watch, user interface, or visualization should be called a complication. That doesn’t mean it needs to be removed. After all, complications can make a watch unique and useful. But it needs to be questioned. It needs to be examined and weighed against the distraction and clutter it causes.

In a watch, it’s clear what its central function is. That is often not nearly as clear in an application or a visualization. But thinking about it in terms of complications might help: if I remove this element/button/label/data series, does it still work? If it does, do I still want to keep the element?

This is similar to the idea that design is about reducing things to their minimal functional and aesthetic core. Thinking of design elements as complications, rather than in strict terms like the data-ink ratio, turns the question from following a supposed rule into weighing the trade-offs. And those can be quite different depending on the goals and the context. An item that is useless in one piece might be crucial to understanding another, or to engaging its audience.

Complications as a concept is nice because it opens up a conversation. It moves us beyond supposedly strict and straightforward rules that seem to be set in stone. Many things are more complicated than that.

Teaser image from Wikimedia, used under Creative Commons.

Link: CG&A Article on Tapestry

Wed, 03/18/2015 - 14:17



I’ve written a short piece about the Tapestry conference for the Graphically Speaking column in Computer Graphics and Applications. The article talks about the reasoning behind Tapestry, how it’s different from academic conferences, and gives a few examples of talks. It even includes anecdotal evidence to show that the conference has enabled actual knowledge transfer.

If you prefer a PDF to the e-reader version, you can find that on my vanity website.

The Value of Illustrating Numbers

Mon, 03/09/2015 - 04:37



Showing data isn’t always about trying to convey an insight, or giving people the means to understand the intricacies of data. It can also be a tool to communicate a fact, an amount, or an issue beyond just the sheer numbers. Data illustration is poorly understood, but it can be very powerful.

About a year ago, I had a Twitter conversation with Stefanie Posavec about the difference between data visualization and data illustration. We ended up with a sort of definition.

@stefpos so I guess if the goal is to impress, inspire awe, or to make people wonder, it’s illustration; if it’s to inform, it’s vis.

— Robert Kosara (@eagereyes) March 12, 2014

The source of this discussion was a video that was making the rounds at the time, and which I didn’t find quite as impressive as a lot of other people. It shows flights in Europe over 24 hours.

I’ve since changed my mind about it, but two things bothered me at the time: it was considered a visualization, even though you could glean very little insight from it; and it did a poor job of showing how busy some of Europe’s busiest airports are. The latter is because it shows the actual flight paths, which all merge on final approach to landing, so the planes all blend together.

Why Illustrate?

Not all data is created equal. There is basically no data you can’t turn into a bar chart and get some sort of insight from. But some data requires a bit more care and thought – not because of its structure, but because of what it represents.

When it comes to data about people, perhaps the approach needs to be a bit more thoughtful and respectful. Looking at data about homeless people, do we really need yet another goddamn bar chart? Is there not a more appropriate way to look at this data? Or think of the design process and thought that went into the 9/11 Memorial. This isn’t the phone book, these are all individuals who died in horrendous ways.

Other examples include Periscopic’s U.S. Gun Deaths, Pitch Interactive’s Out of Sight, Out of Mind (about drone strikes), and the Huffington Post’s Mapping the Dead (gun deaths for about three months after the Sandy Hook shooting in late 2012). They do not attempt to put numbers into perspective, because the perspective is not what matters. The goal is to punch you in the gut, to make you feel something.

Numbers always invite comparison. But there is a point at which comparison becomes a distraction and an excuse to minimize the number. Were there more traffic deaths than gun deaths in 2013? Probably. What other numbers can you find that are higher? But that’s missing the point: to show the sheer number, no matter how it compares. It’s the absolute number that counts. Every one of those deaths is one too many.

What the gun deaths and drone strikes pieces (and, to an extent, the flights in Europe video) do so well is give you a sense of a number. This is the number, period. Don’t try to compare, just appreciate that number. Get a sense of its magnitude. Feel it.

Illustration vs. Visualization

Among the many things we don’t yet understand is the relationship and value of illustration compared to visualization. Illustrating a number is a noble endeavor, and one that needs to be done with thought and care. It’s not an easy decision to make, and it’s often hard to tell what a particular piece was built to achieve (and whether its creator had a clear enough idea).

The examples above give a bit of a sense of where I think things are headed. And I hope that we can be a bit clearer in distinguishing between illustration and visualization. Because both are important when trying to get a point across.

Link: The Graphic Continuum

Wed, 02/25/2015 - 15:17



The Graphic Continuum is a poster created by Jon Schwabish and Severino Ribecca (the man behind the Data Visualisation Catalogue). It lists almost 90 different chart types and organizes them into six large groups: distribution, time, comparing categories, geospatial, part-to-whole, and relationships. Some of them are connected across groups where there are further similarities.

The poster is printed very nicely and makes for a great piece of wall art to stare at when thinking about data, and maybe to get an idea for what new visualization to try.

Link: Joint Committee on Standards for Graphic Presentation (1915)

Wed, 02/18/2015 - 15:17



An article in the Publications of the American Statistical Association by the Joint Committee on Standards for Graphic Presentation laid down some standards for how to create good data visualizations. In 1915. The chairman of that committee was none other than Willard C. Brinton, author of the highly opinionated (and much more complete) Graphic Methods for Presenting Facts. Andy Cotgreave is collecting some tidbits and highlights from Brinton’s books.

Video: Nigel Holmes on Humor in Visualization and Infographics

Wed, 02/11/2015 - 15:17



In this talk, Nigel Holmes talks about the value and use of humor in communicating with visualization. He also has some interesting criticism of academic visualization research (and of some more artistic pieces). It’s a fun and interesting talk, as always with Nigel Holmes.

Link: Becksploitation: The Over-Use of a Cartographic Icon

Wed, 02/04/2015 - 15:17



The paper Becksploitation: The Over-Use of a Cartographic Icon by Kenneth Field and William Cartwright (free pre-print PDF) in The Cartographic Journal describes Harry Beck’s famous map of the London Underground and what makes it great. It also offers a collection of misuses of its superficial structure, and critiques them. I wish we had papers (and titles!) like this in visualization.

The paper is available online for free for the next twelve months, along with a selection of other Editor’s Choice papers (including Jack van Wijk’s Myriahedral Projections paper – watch the video if you haven’t seen it).

Among the many interesting tidbits in the paper is this clever image by Jamie Quinn. The subway map to end all subway maps (other than for subways).

Spelling Things Out

Tue, 02/03/2015 - 04:55



When visualizing data, we often strive for efficiency: show the data, nothing else. But there can be tremendous value in redundancy to make a point and drive it home. Two recent examples from news graphics illustrate this nicely.

The first is this animated chart of global temperatures from 1881 to 2014. It shows more data than is really needed. Why show monthly data when talking about the yearly average? Why the animation of all those lines when you could just show a bar chart of the yearly averages?

But that is exactly what makes this chart work. By watching the yearly average increase, you get a much clearer (and more urgent!) sense of how temperatures are rising. The little indicator of when a new record is set doesn’t show up often at first, but then keeps going off. It’s a smart piece that takes the data and turns it into a statement.

If you haven’t seen the animated version, it’s well worth spending a minute watching. This is the difference between data analysis and communication.

The other example is a static image comparing two numbers. The numbers aren’t terribly difficult to understand or compare. They’re not even particularly big. One number is 4, the other 644. There’s clearly a difference between them, but just reading them you might not think that much of it. However, the point is driven home by actually showing the number as little icons of people.

The point of this article about politicians’ health priorities becomes much more urgent through this type of information graphic than through just throwing around abstract numbers. You can ignore a number you read, but you can’t ignore this visual comparison.

If anything, I think it’s a mistake to overlap the icons, which compresses them and makes it harder to appreciate the actual number. Spelling it out even more with neatly aligned, non-overlapping figures would make this point even more clearly.
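
The “spell it out” idea is easy to prototype. Below is a toy sketch (mine, not from the article) that renders a count as one generic icon per person, wrapped into neatly aligned, non-overlapping rows; the counts 4 and 644 are the ones from the example above.

```python
def spell_out(count, per_row=50, icon="•"):
    """Render a number as one icon per unit, wrapped into tidy rows."""
    rows = []
    for start in range(0, count, per_row):
        # Each row holds per_row icons, except possibly the last one.
        rows.append(icon * min(per_row, count - start))
    return "\n".join(rows)

print(spell_out(4))    # one short row of 4 icons
print(spell_out(644))  # 12 full rows of 50, plus a final row of 44
```

Keeping the rows uniform is what makes the magnitude legible: the 644 block visibly dwarfs the 4 without any overlap compressing it.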

Efficiency clearly has its place in visualization, in particular in analysis. But knowing when the right choice is not the efficient one is what makes all the difference when it comes to communication.

Link: Tapestry 2015

Wed, 01/28/2015 - 15:17



Tapestry 2015 will take place March 4 in Athens, GA. This is the third time we are holding the conference, and it is again taking place on the day before NICAR. As in past years, we have a kick-ass line-up of speakers. The keynotes will be given by Hannah Fairfield (NY Times), Kim Rees (Periscopic), and Michael Austin (Useful Fictions). We also have a great set of short stories speakers: Chad Skelton (Vancouver Sun), Ben Jones (Tableau Public), Katie Peek (Popular Science), RJ Andrews (Info We Trust), and Kennedy Elliott (Washington Post).

On the website, you can watch a brief video summary (click on See what Tapestry is all about) or see all the talk videos from last year. We have also posted information on how to get there, and there will be a bus to take you to NICAR after the event.

Time is running out to apply for an invitation if you want to attend. Attendance is limited, and we’re trying to keep the event small and focused.

Seminal InfoVis Paper: Treisman, Preattentive Processing

Mon, 01/26/2015 - 04:26



A paper on a specific cognitive mechanism may seem like an odd choice as the first paper in this series, but it is the one that sparked the idea for it. It is also the one that has its 30th birthday this year, having been published in August 1985. And it is an important paper, one that could play an even bigger role in visualization if properly understood and used.

Preattentive Processing in Vision

Anne Treisman’s work is unfortunately misunderstood about as often as her name is misspelled (it’s not i before e). Her paper, Preattentive Processing in Vision in the journal Computer Vision, Graphics, and Image Processing (August 1985, vol. 31, no. 2, pp. 157–177), describes a mechanism in our perceptual system that allows us to perform a small set of well-defined operations on certain visual properties without the need for conscious processing or serial search.

This is often demonstrated with the so-called pop-out effect. Count the 9s in the following image (this example is stolen from Stephen Few).

Not easy, and in particular it requires scanning the image line by line. You can’t quickly find a shape like a 9 among very similar ones (like 8s, 6s, and 3s). Now let’s try this in a way that activates your preattentive processor.

Much easier! The 9s pop out. You can’t not see them. They’re there, easy to count. What you can do now is

  • detect their presence (or tell their absence) and point to where they were
  • estimate how many there were as a fraction of the total number of objects
  • detect boundaries between groups of objects that have similar properties (i.e., if the 9s were grouped in some way, you would perceive that as a shape).

All of this is possible even if you only saw this image for a fraction of a second (50–200ms) and your precision does not change significantly if we were to increase the number of objects (up to a reasonable limit).

There are a number of visual properties that this works for, including color, size, orientation, certain texture and motion attributes, etc. Chris Healey has a great webpage with demos and a more complete list.

Combining preattentive features is problematic: if the digits varied in both color and weight, say blue or orange and bold or regular, and I wanted you to count just the bold orange ones, you’d still have to search serially (this is called conjunctive search). If the features were combined so that the combination was unique (i.e., all bold digits were also orange, and nothing else was orange or bold), the search would be easier and still preattentive (disjunctive search).

This is a very interesting effect for a number of reasons, and it can be used quite effectively in visualization. But it’s also important to understand its significant limitations. Used with restraint, preattentive processing can achieve great effect. But not every use of a strong color contrast means that you’re using a preattentive feature.
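
The serial-search and pop-out displays described above are easy to mock up. In this sketch (my own, not the study’s actual stimuli), brackets stand in for the single unique preattentive feature, such as a hue or weight difference, that lets the 9s pop out:

```python
import random

random.seed(42)
# A 6x20 field of distractor digits (3, 6, 8) with some target 9s mixed in.
grid = [[random.choice("36889") for _ in range(20)] for _ in range(6)]

# Serial search: the 9s share shape, weight, and color with the
# distractors, so finding them requires scanning line by line.
plain = "\n".join(" ".join(row) for row in grid)

# Pop-out: mark the targets with one unique feature (brackets here,
# standing in for hue, size, or orientation in a real display).
popped = "\n".join(
    " ".join(f"[{d}]" if d == "9" else f" {d} " for d in row) for row in grid
)

print(plain, popped, sep="\n\n")
```

Marking the targets with two features at once (say, brackets and uppercase letters mixed independently) would recreate the conjunctive-search problem instead.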

Taking Things Further

Treisman’s paper is cited quite a bit in visualization, but it doesn’t always extend beyond lip service. One of the key issues seems to be a misunderstanding of what preattentive features really are, and what sorts of tasks preattentive processing can perform.

But more than that, it’s about restraint. Most visualization systems have way too much going on to be able to make use of preattentive features. A system could conceivably drop all its colors to gray when it wants to point something out using color, and only use color on those parts. Or it could provide certain types of filters or highlights that make use of specific features and are smart about not creating conjunctive searches. Or perhaps even just use it to highlight similar things when hovering.

I don’t believe that we have seen the real power of preattentive processing in visualization yet. What about using it to help people look for clusters in scatterplots? How about dense representations like heat maps? Perhaps there are even specific new techniques that could capitalize on these properties in ways existing ones can’t.

Thirty years after the discovery of the effect, there is still tremendous opportunity to unpack it, understand it, and make use of it in visualization.

Seminal InfoVis Papers: Introduction

Mon, 01/26/2015 - 03:01



Some of the most fundamental and important papers in information visualization are around 30 years old. This is interesting for several reasons. For one, it shows that the field is still very young. Most research fields go back much, much further. Even within such a short time frame, though, there is a danger of not knowing some of the most important pieces of research.

While 30 years is not much, it is also a lot. Some papers get cited over and over again, but more for convenience than with an eye towards truly building upon them and questioning them. They are treated as gospel a bit too much.

The goal of this little series is to describe a few of the most fundamental papers (not just ones that are that old, but also a few more recent ones). I don’t just want to summarize the papers, though, but also show the way forward: what work has been done since, what questions remain open, and what new work could build on them?

A paper’s publication is only the beginning. Its value comes from the work that is built on top of it, questioning it, improving upon it – and, sometimes, proving it wrong.

Link: Data Stories Podcast 2014 Review

Thu, 01/22/2015 - 15:17



Episode 46 of the Data Stories podcast features Andy Kirk and yours truly in an epic battle for podcast dominance, also known as a review of the year 2014. It complements my State of Information Visualization posting well, and of course there is a bit of overlap (I wrote that posting after we recorded the episode – Moritz and Enrico are so slow). There are lots of differences though, and the podcast has the advantage of not being just me talking. We covered a lot of ground, starting from a generally gloomy take on the year and ending up finding quite a few things to talk about (just check out the long list of links in the show notes!).

Link: Data Viz Done Right

Wed, 01/21/2015 - 15:17



Andy Kriebel’s Data Viz Done Right is a remarkable little website. He collects good examples of data visualization and talks about what works and what doesn’t. He does have bits of criticism sometimes, but he always has more positive than negative things to say about his picks. Good stuff.

Why Is Paper-Writing Software So Awful?

Mon, 01/19/2015 - 03:58



The tools of the trade for academics and others who write research papers are among the worst software has to offer. Whether it’s writing or citation management, there are countless issues and annoyances. How is it possible that this fairly straightforward category of software is so outdated and awful?

Microsoft Word

The impetus for this posting came from yet another experience with one of the most widely used programs in the world. Among some other minor edits on the final version of a paper, I tried to get rid of the blank page after the last one. Easy, just delete the space that surely must be there, right? No, deleting the space does nothing. It doesn’t get deleted, or it comes back, or I don’t know what.

So I select the entire line after the last paragraph and delete that. Now the last page is gone, but the entire document was also just switched from a two-column layout to a single column. Great.

People on Twitter tell me that Word stores formatting information in invisible characters at the end of paragraphs. That may be the case, I really do not care. But that it’s possible for me to delete something I can’t see and thus screw up my entire document has to be some sort of cruel joke. Especially for a program that has been around for so long and is used by millions of people every day.

Word has a long history (it was first released in 1983, over 30 years ago), and carries an enormous amount of baggage. Even simple things like figure captions and references are broken in interesting ways. Placing and moving figures is problematic to say the least. Just how poorly integrated some of Word’s features are becomes apparent when you try to add comments to a figure inside a text box (you can’t) or replace the spaces before the square brackets inserted by a citation manager with non-breaking ones (Word replaces the entire citation rather than just the opening bracket, even though only the bracket matches the search).

In trying to be everything to everybody, Word does many things very, very poorly. I have tried alternatives, but they are universally worse. I generally like Pages, but its lack of integration with a citation manager (other than the godawful Endnote) makes it a no-go.

LaTeX

We all know that you write serious papers in LaTeX, right? Any self-respecting computer scientist composes his formula-laden treatises in the only program that can insert negative spaces exactly where you need them. LaTeX certainly doesn’t have the issues Word has, but it has its own set of problems that make it only marginally better (if at all).

It is also starting to seriously show its age. TeX, which is the basic typesetting system LaTeX is based on, was released in 1978 – almost 40 years ago. LaTeX made its debut in 1984, over 30 years ago. These are some of the oldest programs still in widespread use, and LaTeX isn’t getting anywhere near the development resources Word does.

While a lot of work has been done to keep it from falling behind entirely (just be thankful that you can create PDFs directly without even having to know what a dvi file is, or how bad embedded bitmap fonts were), there are also tons of issues. Need a reference inside a figure caption? Better know what \protect does, or the 1970s-era parser will yell at you. Forgot a closing brace? Too bad, you’ll have to find it by scanning through the entire document manually, even though TeX’s parser could easily tell you if it had been updated in the last 20 years. Want to move a figure? Spend 15 minutes moving the figure block around in the text and hope there’s a place where it’ll fall where you want it. And the list goes on.
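
The \protect situation, for those who have not run into it: a fragile command like \cite inside a caption’s moving argument breaks when LaTeX writes it out to the auxiliary files, unless it is shielded. A minimal sketch (the file name and citation key are placeholders):

```latex
\begin{figure}
  \centering
  \includegraphics{reading-times}
  % \cite is fragile: the caption is also written to the .lof file,
  % where the unprotected macro expands at the wrong time. \protect
  % defers the expansion; a short optional caption would also work.
  \caption{Reading performance by chart type, following
           \protect\cite{haroz2015isotype}.}
  \label{fig:performance}
\end{figure}
```

The error message you get without \protect, of course, says nothing about captions or fragility.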

And then there are the errors you can’t even fix directly. The new packages that insert links into the bibliography are great, except when the link breaks over a column boundary, which causes an error that you can’t avoid. All you can do is add or remove text so the column boundary falls differently. Great fun when this happens right before a deadline.

Citation Managers

In the old days, putting your references together was a ton of work: you had to collect them in one place, keep the list updated when you wanted to add or remove one, then sort and format them, and maybe turn the placeholder references in the paper text into numbers. Any time you’d add or remove one, you had to do it over again.

BibTeX

Enter bibliography software. In the dinosaur corner, we have bibTeX. As the name suggests, it works with (La)TeX. And it’s almost as old, having been released in 1985. It uses a plain text file with a very simple (and brittle) format for all its data, and you have to run LaTeX three times to make sure all references are really correct. This puts even the old two-pass compilers to shame, but that’s how bibTeX works.
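
That plain-text format looks like the entry below (reconstructed from the ISOTYPE citation earlier in this feed; the key and field layout are my own). One missing comma or unbalanced brace is enough to derail the parser, often with an error pointing somewhere else entirely:

```bibtex
@inproceedings{haroz2015isotype,
  author    = {Haroz, Steve and Kosara, Robert and Franconeri, Steven P.},
  title     = {{ISOTYPE} Visualization -- Working Memory, Performance,
               and Engagement with Pictographs},
  booktitle = {Proceedings CHI},
  pages     = {1191--1200},
  year      = {2015}
}
```

The extra braces around {ISOTYPE} keep its capitalization intact in styles that lowercase titles.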

There are programs that provide frontends for these text files, and they’re mostly ugly and terrible. A notable exception here is BibDesk, especially if you’re in the life sciences. It works really well and doesn’t get in the way. It’s an unassuming little program, and it gets updated pretty continuously. What it does, it does really quite well.

But the rest of the field is as horrifying a train wreck as the writing part.

Mendeley

I can’t quite share in the doomsday-is-here wailing that started when Elsevier bought Mendeley, and I haven’t seen any terrible decisions yet. What drives me up the wall are simply the bugs and the slowness and the things you expect to work but don’t.

Why does All Documents not include all documents? Why do I have to drag a paper I imported into a group into All Documents so it shows up there? Why are papers in groups copies instead of references, so that when I update one, the other one doesn’t get updated? The most basic things are so incredibly frustrating.

To be fair, Mendeley is constantly improving and is nowhere near as terrible as it was a year or two ago. It still has a ways to go, though. And I really hope they get serious about that iPad app at some point.

Papers

I’m trying to love Papers. I really do. It’s a native Mac app (though there’s now also a Windows version). It looks good. But it manages to be buggy and annoying in many places where Mendeley works well.

For one, the search in Papers is broken. I cannot rely on it to find stuff. It’s an amazingly frustrating experience when you search for an author and can’t see a particular paper you’re sure is there, and then search for its title and there it is! The ‘All Fields’ setting in the search also doesn’t seem to include nearly all fields, like the author. And matching papers against the global database has its own set of pitfalls and annoyances (like being able to edit fields in a matched paper only to have your edits cheerfully thrown away when you’re not looking). The list goes on (don’t open the grid view if you have large PDFs in your collection, etc.).

Endnote

Listed only for completeness. Beyond terrible. Written by some sort of committee that understands neither paper writing nor software. I really can’t think of any non-academic commercial software that’s worse (within the category of software for academic users, it’s neck and neck with that nightmare that is Banner).

A Better Way?

How is it possible that the tools of the trade for academics are so outdated, insufficient, and just plain terrible? Is there really nothing in writing tools that is smarter than treating text as a collection of letters and spaces? Can’t we have a tool that combines reasonable layout (the stuff LaTeX is good at, without the parts it sucks at) with a decent reference manager?

This isn’t rocket surgery. All these things have well-known algorithms and approaches (partly due to the work that went into TeX and other systems). There have also been advances since the days when Donald Knuth wrote TeX. Having classics to look back at is great, but being stuck with them is not. And it’s particularly infuriating in what is supposed to be high technology.

What I understand even less is that there are no tools that consider text in a way that’s more semantic. Why can’t I treat paragraphs or sections as objects? Why doesn’t a section know that its title is part of it and thus needs to be included when I do something to it? Why don’t word processors allow me to fold a paragraph or section or chapter, like some source code editors do? Why can’t figures float while I move them and anchor only to certain positions given the constraints in a template?

There are so many missed opportunities here, it’s breathtaking. There has to be a world beyond the dumb typewriters with fancy clipart we have today. Better, more structured writing tools (like Scrivener, but with a reference manager) have got to be possible and viable as products.

We can’t continue writing papers with technology that hasn’t had any meaningful updates in 30 years (LaTeX) or that tries to cover everything that contains text in some form (Word). There has got to be a better way.

Links: 2014 News Graphics Round-Ups

Wed, 01/14/2015 - 15:17



It used to be difficult to find news graphics from the main news organizations. In the last few years, they have started to post year-end lists of their work, which are always a treat to walk through. With the new year a few weeks behind us, this is a good time to look at these collections of news graphics.

Slightly different, but worth a special mention, is NZZ’s amazing visualization of all their articles from the year, Das Jahr 2014 in der «Neuen Zürcher Zeitung» (in German).