Another word for it


Software Security (MOOC, Starts October 13, 2014!)

Wed, 10/08/2014 - 00:21

Categories:

Topic Maps

Software Security

From the post:

Weekly work done at your own pace and schedule by listening to lectures and podcasts, completing quizzes, exercises and peer evaluations. Estimated time commitment is 4 hours/week. Course runs for 9 weeks (ends December 5).


This MOOC introduces students to the discipline of designing, developing, and testing secure and dependable software-based systems. Students will be exposed to the techniques needed for the practice of effective software security techniques. By the end of the course, you should be able to do the following things:

  • Security risk management. Students will be able to assess the security risk of a system under development. Risk management will include the development of formal and informal misuse case and threat models. Risk management will also involve the utilization of security metrics.
  • Security testing. Students will be able to perform all types of security testing, including fuzz testing at each of these levels: white box, grey box, and black box/penetration testing.
  • Secure coding techniques. Students will understand secure coding practices to prevent common vulnerabilities from being injected into software.
  • Security requirements, validation and verification. Students will be able to write security requirements (which include privacy requirements). They will be able to validate these requirements and to perform additional verification practices of static analysis and security inspection.
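The black-box end of that testing spectrum is easy to get a feel for before the course starts. Below is a minimal mutation-fuzzing sketch in Python; parse_record is a hypothetical stand-in for whatever input-handling code you want to stress, not anything from the course materials.

import random

def parse_record(data: bytes) -> None:
    """Hypothetical stand-in for the input-handling code under test."""
    text = data.decode("utf-8")  # may raise UnicodeDecodeError
    name, _, age = text.partition(",")
    if not name:
        raise ValueError("empty name")
    int(age)  # may raise ValueError

def mutate(seed: bytes, flips: int = 3) -> bytes:
    """Randomly corrupt a few bytes of a known-good input."""
    buf = bytearray(seed)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

seed = b"alice,42"
for _ in range(10_000):
    case = mutate(seed)
    try:
        parse_record(case)
    except (ValueError, UnicodeDecodeError):
        pass  # documented, handled errors
    except Exception as exc:
        print(f"possible bug on input {case!r}: {exc!r}")  # anything else is a finding

White box and grey box fuzzing replace the blind mutation step with feedback from instrumentation and coverage data, but the loop is the same.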



This course is run by the Computer Science department at North Carolina State University.

Register

One course won’t make you a feared White/Black Hat but everyone has to start somewhere.

Looks like a great opportunity to learn about software security issues and to spot where subject identity techniques could help collate holes or fixes.

The Definitive “Getting Started” Tutorial for Apache Hadoop + Your Own Demo Cluster

Wed, 10/08/2014 - 00:11

Categories:

Topic Maps

The Definitive “Getting Started” Tutorial for Apache Hadoop + Your Own Demo Cluster by Justin Kestelyn.

From the post:

Most Hadoop tutorials take a piecemeal approach: they either focus on one or two components, or at best a segment of the end-to-end process (just data ingestion, just batch processing, or just analytics). Furthermore, few if any provide a business context that makes the exercise pragmatic.

This new tutorial closes both gaps. It takes the reader through the complete Hadoop data lifecycle—from data ingestion through interactive data discovery—and does so while emphasizing the business questions concerned: What products do customers view on the Web, what do they like to buy, and is there a relationship between the two?

Getting those answers is a task that organizations with traditional infrastructure have been doing for years. However, the ones that bought into Hadoop do the same thing at greater scale, at lower cost, and on the same storage substrate (with no ETL, that is) upon which many other types of analysis can be done.

To learn how to do that, in this tutorial (and assuming you are using our sample dataset) you will:

  • Load relational and clickstream data into HDFS (via Apache Sqoop and Apache Flume respectively)
  • Use Apache Avro to serialize/prepare that data for analysis
  • Create Apache Hive tables
  • Query those tables using Hive or Impala (via the Hue GUI)
  • Index the clickstream data using Flume, Cloudera Search, and Morphlines, and expose a search GUI for business users/analysts
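For a taste of the first step, here is a hedged sketch that drives the Sqoop CLI from Python. The JDBC URL, credentials, table and HDFS paths are placeholders of my own, not values from the tutorial.

import subprocess

# Placeholder connection details; substitute your own database and HDFS paths.
sqoop_cmd = [
    "sqoop", "import",
    "--connect", "jdbc:mysql://dbhost:3306/retail",  # hypothetical source database
    "--username", "etl_user",
    "--password-file", "/user/etl/.password",  # keeps the secret out of argv
    "--table", "customers",
    "--target-dir", "/user/etl/customers",  # HDFS destination
    "--as-avrodatafile",  # serialize with Avro, as the tutorial does
]
subprocess.run(sqoop_cmd, check=True)  # raises CalledProcessError on failure

Flume plays the analogous role for the clickstream logs, and the Hive/Impala queries later in the tutorial run against the files this lands in HDFS.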

I can’t imagine which “other” tutorials Justin has in mind.

To be fair, I haven’t taken this particular tutorial. Which Hadoop tutorials would you suggest as comparisons to this one?

History of Apache Storm and lessons learned

Wed, 10/08/2014 - 00:00

Categories:

Topic Maps

History of Apache Storm and lessons learned by Nathan Marz.

From the post:

Apache Storm recently became a top-level project, marking a huge milestone for the project and for me personally. It’s crazy to think that four years ago Storm was nothing more than an idea in my head, and now it’s a thriving project with a large community used by a ton of companies. In this post I want to look back at how Storm got to this point and the lessons I learned along the way.

The topics I will cover through Storm’s history naturally follow whatever key challenges I had to deal with at those points in time. The first 25% of this post is about how Storm was conceived and initially created, so the main topics covered there are the technical issues I had to figure out to enable the project to exist. The rest of the post is about releasing Storm and establishing it as a widely used project with active user and developer communities. The main topics discussed there are marketing, communication, and community development.

Any successful project requires two things:

  1. It solves a useful problem
  2. You are able to convince a significant number of people that your project is the best solution to their problem

What I think many developers fail to understand is that achieving that second condition is as hard and as interesting as building the project itself. I hope this becomes apparent as you read through Storm’s history.

Every project/case is somewhat different, but this history of Storm is a relevant and great read!

I would highlight: It solves a useful problem.

I don’t read that to say:

  • It solves a problem I want to solve
  • It solves a problem you didn’t know you had
  • It solves a problem I care about
  • etc.

To be a “useful” problem, some significant segment of users must recognize it as a problem. If they don’t see it as a problem, then it doesn’t need a solution.

Boiling Sous-Vide Eggs using Clojure’s Transducers

Tue, 10/07/2014 - 23:45

Categories:

Topic Maps

Boiling Sous-Vide Eggs using Clojure’s Transducers by Stian Eikeland.

From the post:

I love cooking, especially geeky molecular gastronomy cooking, you know, the type of cooking involving scientific knowledge and equipment, and ingredients like liquid nitrogen and similar. I already have a sous-vide setup, well, two actually (here is one of them: sousvide-o-mator), but I have none that run Clojure. So join me while I attempt to cook up some sous-vide eggs using the new transducers coming in Clojure 1.7. If you don’t know what transducers are about, take a look here before you continue.

To cook sous-vide we need to keep the temperature at a given point over time. For eggs, around 65C is pretty good. To do this we use a PID-controller.
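If you have not met a PID controller, it is just three terms summed: one proportional to the current error, one to the accumulated error, and one to the error's rate of change. A minimal sketch in Python (rather than the post's Clojure transducers), with illustrative, untuned gains:

class PID:
    """Minimal PID controller: output = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp: float, ki: float, kd: float, setpoint: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured: float, dt: float) -> float:
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hold the water bath at 65C; gains are made up for illustration, not tuned.
pid = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=65.0)
heater_power = pid.update(measured=62.3, dt=1.0)  # e.g. duty cycle for the heater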

I was hoping that Clojure wasn’t just of academic interest and would have some application in the “real world.” Now, proof arrives of real world relevance!

For those of you who don’t easily recognize humor, I know that Clojure is used in many “real world” applications and situations. Comments to that effect will be silently deleted.

Whether the toast and trimmings were also prepared using Clojure the author does not say.

Magna Carta Ballot – Deadline 31 October 2014

Tue, 10/07/2014 - 21:46

Categories:

Topic Maps

Win a chance to see all four original 1215 Magna Carta manuscripts together for the first time #MagnaCartaBallot

From the post:

Magna Carta is one of the world’s most influential documents. Created in 1215 by King John and his barons, it has become a potent symbol of liberty and the rule of law.

Eight hundred years later, all four surviving original manuscripts are being brought together for the first time on 3 February 2015. The British Library, Lincoln Cathedral and Salisbury Cathedral have come together to stage a one-off, one-day event sponsored by Linklaters.

This is your chance to be part of history as we give 1,215 people the unique opportunity to see all four Magna Carta documents at the British Library in London.

The unification ballot to win tickets is free to enter. The closing date is 31 October 2014.

According to the FAQ, you have to get yourself to London on the specified date and at the required time.

Good luck!

Bioinformatics tools extracted from a typical mammalian genome project

Tue, 10/07/2014 - 00:55

Categories:

Topic Maps

Bioinformatics tools extracted from a typical mammalian genome project

From the post:

In this extended blog post, I describe my efforts to extract the information about bioinformatics-related items from a recent genome sequencing paper, and the larger issues this raises in the field. It’s long, and it’s something of a hybrid between a blog post and a paper format, just to give it some structure for my own organization. A copy of this will also be posted at FigShare with the full data set. Huge thanks to the gibbon genome project team for a terrific paper and extensively-documented collection of their processes and resources. The issues I wanted to highlight are about the access to bioinformatics tools in general and are not specific to this project at all, but are about the field.

A must read if you are interested in useful preservation of research and data. The paper focuses on needed improvements in bioinformatics but the issues raised are common to all fields.

How well does your field perform when compared to bioinformatics?

TinkerPop 3.0.0.M3 Released (A Gremlin Rāga in 7/16 Time)

Tue, 10/07/2014 - 00:30

Categories:

Topic Maps

TinkerPop 3.0.0.M3 Released (A Gremlin Rāga in 7/16 Time) by Marko Rodriguez.

From the post:

TinkerPop 3.0.0.M3 has been released. This release has numerous core bug-fixes/optimizations/features. We were anxious to release M3 due to some changes in the Process API. These changes should not affect the user, only vendors providing a Gremlin language variant (e.g. Gremlin-Scala, Gremlin-JavaScript, etc.). From what I hear, it “just worked” for Gremlin-Scala so that is good. Here are links to the release:

CHANGELOG: https://github.com/tinkerpop/tinkerpop3/blob/master/CHANGELOG.asciidoc#tinkerpop-300m3-release-date-october-6-2014
AsciiDoc: http://www.tinkerpop.com/docs/3.0.0.M3/
JavaDoc: http://www.tinkerpop.com/javadocs/3.0.0.M3/
Downloads:
– Gremlin-Console: http://www.tinkerpop.com/downloads/3.0.0.M3/gremlin-console-3.0.0.M3.zip
– Gremlin-Server: http://www.tinkerpop.com/downloads/3.0.0.M3/gremlin-server-3.0.0.M3.zip

Are you going to accept Marko’s anecdotal assurance that it “just worked” for Gremlin-Scala, or will you put this release to the test?

I am sure Marko and others would like to know!

Bossies 2014: The Best of Open Source Software Awards

Mon, 10/06/2014 - 21:30

Categories:

Topic Maps

Bossies 2014: The Best of Open Source Software Awards by Doug Dineley.

From the post:

If you hadn’t noticed, we’re in the midst of an incredible boom in enterprise technology development — and open source is leading it. You’re unlikely to find better proof of that dynamism than this year’s Best of Open Source Software Awards, affectionately known as the Bossies.

Have a look for yourself. The result of months of exploration and evaluation, plus the recommendations of many expert contributors, the 2014 Bossies cover more than 130 award winners in six categories:

(emphasis added)

Hard to judge the count because winners are presented one page at a time in each category. Not to mention that at least one winner appears in two separate categories.

Put into lists and sorted for review, we find:

Open source applications (16)

Open source application development tools (42)

Open source big data tools (20)

Open source desktop and mobile software (14)

Open source data center and cloud software (19)

Open source networking and security software (9)

Putting the winners into lists reveals the actual count: the category totals above sum to 120 entries, but allowing for entries that mention more than one software package, the count comes to 122 software packages.

BTW, Docker appears both under application development tools and under data center and cloud software, which should make the final count 121 different software packages. (You will have to check the entries at InfoWorld to verify that number.)

PS: The original presentation was in no discernible order. I put the lists into alphabetical order for ease of finding.

The Barrier of Meaning

Sun, 10/05/2014 - 23:40

Categories:

Topic Maps

The Barrier of Meaning by Gian-Carlo Rota.

The author discusses the “AI-problem” with Stanislaw Ulam. Ulam makes reference to the history of the “AI-problem” and then continues:

Well, said Stan Ulam, let us play a game. Imagine that we write a dictionary of common words. We shall try to write definitions that are unmistakably explicit, as if ready to be programmed. Let us take, for instance, nouns like key, book, passenger, and verbs like waiting, listening, arriving. Let us start with the word “key.” I now take this object out of my pocket and ask you to look at it. No amount of staring at this object will ever tell you that this is a key, unless you already have some previous familiarity with the way keys are used.

Now look at that man passing by in a car. How do you tell that it is not just a man you are seeing, but a passenger?

When you write down precise definitions for these words, you discover that what you are describing is not an object, but a function, a role that is inextricably tied to some context. Take away that context, and the meaning also disappears.

When you perceive intelligently, as you sometimes do, you always perceive a function, never an object in the set-theoretic or physical sense.

Your Cartesian idea of a device in the brain that does the registering is based upon a misleading analogy between vision and photography. Cameras always register objects, but human perception is always the perception of functional roles. The two processes could not be more different.

Your friends in AI are now beginning to trumpet the role of contexts, but they are not practicing their lesson. They still want to build machines that see by imitating cameras, perhaps with some feedback thrown in. Such an approach is bound to fail since it starts out with a logical misunderstanding….

Should someone mention this to the EC Brain project?

BTW, you may be able to access this article at: Physica D: Nonlinear Phenomena, Volume 22, Issues 1–3, Pages 1-402 (October–November 1986), Proceedings of the Fifth Annual International Conference. For some unknown reason, the editorial board pages are $37.95, as are all the other articles, save for this one by Gian-Carlo Rota, which, as of today, is freely accessible.

The webpages say Physica D supports “open access.” I find that rather doubtful when only three (3) pages out of four hundred and two (402) require no payment. For material published in 1986.

You?

EcoData Retriever

Sun, 10/05/2014 - 23:01

Categories:

Topic Maps

EcoData Retriever

From the webpage:

Most ecological datasets do not adhere to any agreed-upon standards in format, data structure or method of access. As a result acquiring and utilizing available datasets can be a time consuming and error prone process. The EcoData Retriever automates the tasks of finding, downloading, and cleaning up ecological data files, and then stores them in a local database. The automation of this process reduces the time for a user to get most large datasets up and running by hours, and in some cases days. Small datasets can be downloaded and installed in seconds and large datasets in minutes. The program also cleans up known issues with the datasets and automatically restructures them into standard formats before inserting the data into your choice of database management systems (Microsoft Access, MySQL, PostgreSQL, and SQLite, on Windows, Mac and Linux).

When faced with:

…datasets [that] do not adhere to any agreed-upon standards in format, data structure or method of access

you can:

  • Complain to fellow cube dwellers
  • Complain about data producers
  • Complain to the data producers
  • Create a solution to clean up and reformat the data as open source

Your choice?
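If you take the last option, the core pattern the Retriever automates (download, clean, restructure, load) is short enough to sketch. The URL and column names below are hypothetical:

import csv
import io
import sqlite3
import urllib.request

URL = "http://example.org/survey.csv"  # hypothetical dataset location

# Download and parse the raw CSV.
with urllib.request.urlopen(URL) as resp:
    rows = list(csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8")))

# Clean up known issues, e.g. ad hoc missing-value codes.
for row in rows:
    if row["abundance"] in ("", "NA", "-999"):
        row["abundance"] = None

# Restructure into a standard form and load into a local database.
con = sqlite3.connect("ecology.db")
con.execute("CREATE TABLE IF NOT EXISTS survey (site TEXT, species TEXT, abundance REAL)")
con.executemany("INSERT INTO survey VALUES (:site, :species, :abundance)", rows)
con.commit()
con.close()

The Retriever's value is doing this reliably for dozens of datasets, each with its own quirks, against your choice of database backend.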

I first saw this in a tweet by Dan McGlinn.

Gödel for Goldilocks…

Sun, 10/05/2014 - 20:27

Categories:

Topic Maps

Gödel for Goldilocks: A Rigorous, Streamlined Proof of Gödel’s First Incompleteness Theorem, Requiring Minimal Background by Dan Gusfield.

Abstract:

Most discussions of Gödel’s theorems fall into one of two types: either they emphasize perceived philosophical “meanings” of the theorems, and maybe sketch some of the ideas of the proofs, usually relating Gödel’s proofs to riddles and paradoxes, but do not attempt to present rigorous, complete proofs; or they do present rigorous proofs, but in the traditional style of mathematical logic, with all of its heavy notation and difficult definitions, and technical issues which reflect Gödel’s original exposition and needed extensions by Gödel’s contemporaries. Many non-specialists are frustrated by these two extreme types of expositions and want a complete, rigorous proof that they can understand. Such an exposition is possible, because many people have realized that Gödel’s first incompleteness theorem can be rigorously proved by a simpler middle approach, avoiding philosophical discussions and hand-waving at one extreme; and also avoiding the heavy machinery of traditional mathematical logic, and many of the harder details of Gödel’s original proof, at the other extreme. This is the just-right Goldilocks approach. In this exposition we give a short, self-contained Goldilocks exposition of Gödel’s first theorem, aimed at a broad audience.

Proof that even difficult subjects can be explained without “hand-waving” or the “heavy machinery of traditional mathematical logic.”
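For reference, the theorem being proved, in its standard modern (Gödel-Rosser) form; this is the textbook statement, not Gusfield's exact formulation:

\textbf{G\"odel's First Incompleteness Theorem.}
Let $T$ be a consistent, effectively axiomatizable theory that
interprets enough arithmetic. Then there is a sentence $G_T$ in
the language of $T$ such that
\[
  T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T,
\]
so $T$ is incomplete.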

I first saw this in a tweet by Lars Marius Garshol.

Nobody Cares About Your “Billion Dollar Idea”

Sun, 10/05/2014 - 15:19

Categories:

Topic Maps

Nobody Cares About Your “Billion Dollar Idea” by Gary Vaynerchuk.

From the post:

I have UNLIMITED ideas. If you have the idea that’s nice, but if you don’t have the dollars or the inventory, well then, you have nothing. So, the only way you can do something about that is to go ahead and get dollars from somebody.

An echo of Jack Park’s “Just f*cking do it!,” albeit in a larger forum.

What are you doing this week to turn your idea into a tangible reality?

JUNO

Sun, 10/05/2014 - 01:09

Categories:

Topic Maps

JUNO: Juno is a powerful, free environment for the Julia language.

From the about page:

Juno began as an attempt to provide basic support for Julia in Light Table. I’ve been working on it over the summer as part of Google Summer of Code, and as the project has evolved it’s come closer to providing a full IDE for Julia, with a particular focus on providing a good experience for beginners.

The Juno plugin itself is essentially a thin wrapper which provides nice defaults; the core functionality is provided in a bunch of packages and plugins:

  • Julia-LT – which provides the basic language support for Julia in Light Table
  • Jewel.jl – A Julia source code analysis and manipulation library for Julia
  • June – Nicer themes and font defaults for LT
  • Reminisce – Sublime-style saving of files and content for LT

In case you have forgotten about Julia:

Julia is a high-level, high-performance dynamic programming language for technical computing, with syntax that is familiar to users of other technical computing environments. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library. The library, largely written in Julia itself, also integrates mature, best-of-breed C and Fortran libraries for linear algebra, random number generation, signal processing, and string processing. In addition, the Julia developer community is contributing a number of external packages through Julia’s built-in package manager at a rapid pace. IJulia, a collaboration between the IPython and Julia communities, provides a powerful browser-based graphical notebook interface to Julia.

Julia programs are organized around multiple dispatch; by defining functions and overloading them for different combinations of argument types, which can also be user-defined. For a more in-depth discussion of the rationale and advantages of Julia over other systems, see the following highlights or read the introduction in the online manual.
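Multiple dispatch is worth a gloss. Python offers single dispatch at best, but the idea is easy to mimic with a registry keyed on the types of all arguments. A toy illustration of what multiple dispatch means, not of how Julia implements it:

# Toy multiple dispatch: select an implementation by the types of *all* arguments.
_registry = {}

def dispatch(*types):
    """Register an implementation of collide() for one combination of argument types."""
    def wrap(fn):
        _registry[types] = fn
        return fn
    return wrap

class Asteroid: pass
class Ship: pass

@dispatch(Asteroid, Asteroid)
def _(a, b): return "asteroids shatter"

@dispatch(Ship, Asteroid)
def _(a, b): return "ship takes damage"

@dispatch(Ship, Ship)
def _(a, b): return "ships bounce"

def collide(a, b):
    fn = _registry.get((type(a), type(b)))
    if fn is None:
        raise TypeError(f"no method for ({type(a).__name__}, {type(b).__name__})")
    return fn(a, b)

print(collide(Ship(), Asteroid()))  # -> ship takes damage

Julia does this natively, for every function, with compiled specializations per type combination; that is the point of the paragraph above.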

Curious to see if this project will follow Light Table on to the next IDE project, Eve.

Why Academics Stink at Writing [Programmers Too]

Sun, 10/05/2014 - 00:45

Categories:

Topic Maps

Why Academics Stink at Writing by Steven Pinker.

From the post:

Together with wearing earth tones, driving Priuses, and having a foreign policy, the most conspicuous trait of the American professoriate may be the prose style called academese. An editorial cartoon by Tom Toles shows a bearded academic at his desk offering the following explanation of why SAT verbal scores are at an all-time low: “Incomplete implementation of strategized programmatics designated to maximize acquisition of awareness and utilization of communications skills pursuant to standardized review and assessment of languaginal development.” In a similar vein, Bill Watterson has the 6-year-old Calvin titling his homework assignment “The Dynamics of Inter­being and Monological Imperatives in Dick and Jane: A Study in Psychic Transrelational Gender Modes,” and exclaiming to Hobbes, his tiger companion, “Academia, here I come!”

Steven’s analysis applies mostly to academic writing styles, although I have suffered through more than one tome in CS that apologizes for some topic X being in another chapter. Enough already, just get on with it. It needed severe editing, which would have left it shorter and an easier read.

Worth the read if you try to identify issues in your own writing style. Identifying errors in the writing style of others won’t improve your writing.

I first saw this in a tweet by Steven Strogatz.

PS: Being able to communicate effectively with others is essential to marketing yourself or your products/services.

You Don’t Have to Be Google to Build an Artificial Brain

Sun, 10/05/2014 - 00:24

Categories:

Topic Maps

You Don’t Have to Be Google to Build an Artificial Brain by Cade Metz.

From the post:

When Google used 16,000 machines to build a simulated brain that could correctly identify cats in YouTube videos, it signaled a turning point in the art of artificial intelligence.

Applying its massive cluster of computers to an emerging breed of AI algorithm known as “deep learning,” the so-called Google brain was twice as accurate as any previous system in recognizing objects pictured in digital images, and it was hailed as another triumph for the mega data centers erected by the kings of the web.

But in the middle of this revolution, a researcher named Alex Krizhevsky showed that you don’t need a massive computer cluster to benefit from this technology’s unique ability to “train itself” as it analyzes digital data. As described in a paper published later that same year, he outperformed Google’s 16,000-machine cluster with a single computer—at least on one particular image recognition test.

This was a rather expensive computer, equipped with large amounts of memory and two top-of-the-line cards packed with myriad GPUs, a specialized breed of computer chip that allows the machine to behave like many. But it was a single machine nonetheless, and it showed that you didn’t need a Google-like computing cluster to exploit the power of deep learning.

Cade’s article should encourage you to do two things:

  • Learn GPUs cold
  • Ditto on Deep Learning

Google and others will always have more raw processing power than any system you are likely to afford. However, while a steam shovel can move a lot of clay, it takes a real expert to make a vase, particularly a very good one.

Do you want to pine for a steam shovel or work towards creating a fine vase?

PS: Google isn’t building “an artificial brain,” not anywhere close. That’s why all their designers, programmers and engineers are wetware.

General Theory of Natural Equivalences [Category Theory - Back to the Source]

Sun, 10/05/2014 - 00:08

Categories:

Topic Maps

General Theory of Natural Equivalences by Samuel Eilenberg and Saunders MacLane. (1945)

While reading the Stanford Encyclopedia of Philosophy entry on category theory, I was reminded that despite seeing the citation Eilenberg and MacLane, General Theory of Natural Equivalences, 1945 countless times, I had never attempted to read the original paper.

Considering I once took a graduate seminar on running biblical research back to original sources (as nearly as possible), that is a severe oversight on my part. An article comes to mind that proposed inserting several glyphs into a particular inscription. Plausible, until you look at the tablet in question and realize perhaps one glyph could be restored, but not two or three.

It has been my experience that this was not a unique case, nor is the problem limited to biblical studies.

Category Theory (Stanford Encyclopedia of Philosophy)

Sat, 10/04/2014 - 21:50

Categories:

Topic Maps

Category Theory (Stanford Encyclopedia of Philosophy)

From the entry:

Category theory has come to occupy a central position in contemporary mathematics and theoretical computer science, and is also applied to mathematical physics. Roughly, it is a general mathematical theory of structures and of systems of structures. As category theory is still evolving, its functions are correspondingly developing, expanding and multiplying. At minimum, it is a powerful language, or conceptual framework, allowing us to see the universal components of a family of structures of a given kind, and how structures of different kinds are interrelated. Category theory is both an interesting object of philosophical study, and a potentially powerful formal tool for philosophical investigations of concepts such as space, system, and even truth. It can be applied to the study of logical systems in which case category theory is called “categorical doctrines” at the syntactic, proof-theoretic, and semantic levels. Category theory is an alternative to set theory as a foundation for mathematics. As such, it raises many issues about mathematical ontology and epistemology. Category theory thus affords philosophers and logicians much to use and reflect upon.

Several tweets contained “category theory” and links to this entry in the Stanford Encyclopedia of Philosophy. The entry was substantially revised as of October 3, 2014, but I don’t see a mechanism that allows discovery of changes to the prior text.

For a PDF version of this entry (or other entries), join the Friends of the SEP Society. The cost is quite modest and the SEP is an effort that merits your support.

As a reading/analysis exercise, treat the entries in SEP as updates to Copleston‘s History of Philosophy:

A History of Philosophy 1: Greece and Rome

A History of Philosophy 2: Medieval

A History of Philosophy 3: Late Medieval and Renaissance

A History of Philosophy 4: Modern: Descartes to Leibniz

A History of Philosophy 5: Modern British, Hobbes to Hume

A History of Philosophy 6: Modern: French Enlightenment to Kant

A History of Philosophy 7: Modern: Post-Kantian Idealists to Marx, Kierkegaard and Nietzsche

A History of Philosophy 8: Modern: Empiricism, Idealism, Pragmatism in Britain and America

A History of Philosophy 9: Modern: French Revolution to Sartre, Camus, Lévi-Strauss

Enjoy!

Code as a research object: (new phase) standardizing software metadata

Sat, 10/04/2014 - 20:35

Categories:

Topic Maps

Code as a research object: (new phase) standardizing software metadata by Abigail Cabunoc.

From the post:

At the Science Lab, we want to help research thrive on the open web. Part of this is working with other community members to build technical prototypes that move science on the web forward. Earlier this year we saw several prototypes come out of the ‘Code as a Research Object’ collaboration. Since then, there’s been more conversation and effort in this space and we wanted to share our progress and invite the community to give input.

First, a quick look at ‘Code as a Research Object’

Late last year, “Code as a Research Object” was first announced as a new collaboration between the Science Lab, GitHub, figshare and Zenodo to help explore how to better integrate code and scientific software into the scholarly workflow. Since then, we’ve seen community members come together to build prototypes allowing users to easily get a DOI for their code, making it citable and easier to incorporate into the existing credit system.

Next Steps: Standardizing Metadata

Coming into the conversation, there’s still room for best practices for code reuse and citation. In particular, some form of standardized metadata would help other repositories understand how they can integrate with current systems.

When I was at NCEAS Open Science CodeFest (OSCodeFest) last month, I led a discussion around the work being done here. I was joined by Matt Jones, Carly Strasser and Corinna Gries, and we agreed that some standards need to be set to help more groups store software in a citable and interoperable manner.

Building on the existing discussions and proposals in the community, we compared the existing schemas for code storage to help create a metadata standard that allows for discoverability, reuse and citation. You can see the notes from our discussion here.

This led to the creation of the codemeta GitHub repo to store a minimal metadata schema for science software and code in JSON-LD and XML. Since then, we’ve worked on refining the proposed metadata schema and creating mappings between some existing popular data stores. Coming soon: Matt Jones will be blogging on some of the more technical aspects of this project.

How to get involved

We’re looking for feedback on our current proposed metadata schema for code discovery, reuse and citation.
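For flavor, here is the kind of minimal record such a schema would standardize, rendered as JSON-LD from Python. The field names are my guesses at obvious candidates, not the actual codemeta draft:

import json

# Hypothetical minimal software-metadata record; field names are illustrative only.
record = {
    "@context": "http://example.org/codemeta",  # placeholder context URL
    "@type": "SoftwareSourceCode",
    "name": "EcoAnalysis",
    "version": "0.3.1",
    "author": [{"name": "A. Researcher", "orcid": "0000-0000-0000-0000"}],
    "codeRepository": "https://github.com/example/ecoanalysis",
    "license": "MIT",
    "identifier": "doi:10.0000/example",  # the DOI that makes the code citable
}

print(json.dumps(record, indent=2))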

Here is your chance to contribute to a metadata standard for some sub-set of all software.

I say a sub-set because one of the certainties of standards is that if an area needs standardization there are going to be multiple, evolving standards for it.

That’s not a criticism of standards (I actively work on several) but a statement about the reality of standards. They are useful even though very few ever become universal.

Beyond Light Table

Fri, 10/03/2014 - 15:38

Categories:

Topic Maps

Beyond Light Table by Chris Granger.

From the post:

I have three big announcements to make today. The first is the official announcement of our next project. We’ve been quietly talking about it over the past few months, but today we want to tell you a bit more about it and finally reveal its name:

Eve is our way of bringing the power of computation to everyone, not by making everyone a programmer but by finding a better way for us to interact with computers. On the surface, Eve is an environment a little like Excel that allows you to “program” simply by moving columns and rows around in tables. Under the covers it’s a powerful database, a temporal logic language, and a flexible IDE that allows you to build anything from a simple website to complex algorithms. Instead of poring over text files full of abstract symbols, you interact with domain editors that are parameterized by grids of data. To build a UI you don’t open a text editor, you just draw it on the screen and drag data to it. It’s much closer to the ideal we’ve always had of just describing what we want and letting the machine do the rest. Eve makes the computer a real tool again – one that doesn’t require decades of training to use.

Imagine a world where everyone has access to computation without having to become a professional programmer – where a scientist doesn’t have to rely on the one person in the lab who knows python, where a child could come up with an idea for a game and build it in a couple of weekends, where your computer can help you organize and plan your wedding/vacation/business. A world where programmers could focus on solving the hard problems without being weighed down by the plumbing. That is the world we want to live in. That is the world we want to help create with Eve.

We’ve found our way to that future by studying the past and revisiting some of the foundational ideas of computing. In those ideas we discovered a simpler way to think about computation and have used modern research to start making it into reality. That reality will be an open source platform upon which anyone can explore and contribute their own ideas.

Chris goes on to announce that they have raised more money and are looking to make one or more new hires.

Exciting news and I applaud viewing computers as tools, not as oracles that perform operations on data beyond our ken and deliver answers.

Except easy access to computation doesn’t guarantee useful results. Consider the case of automobiles. Easy access to complex machines results in 37,000 deaths and 2.35 million injuries each year.

Easy access to computers for word processing, email, blogging, webpages, Facebook, etc., hasn’t resulted in a single Shakespearean sonnet, much less the complete works of Shakespeare.

Just as practically, when I am dragging and dropping, how do I distinguish between success on the iris dataset and success on a dataset with missing values, which can make a significant difference in results?

I am not a supporter of using artificial barriers to exclude people from making use of computation but, on the other hand, what weight should be given to their “results”?

As “computation” spreads will “verification of results” become a new discipline in CS?