Another word for it

AverageExplorer: Interactive Exploration and Alignment of Visual Data Collections

Sun, 08/17/2014 - 21:22

Categories:

Topic Maps

AverageExplorer: Interactive Exploration and Alignment of Visual Data Collections, Jun-Yan Zhu, Yong Jae Lee, and Alexei Efros.

Abstract:

This paper proposes an interactive framework that allows a user to rapidly explore and visualize a large image collection using the medium of average images. Average images have been gaining popularity as means of artistic expression and data visualization, but the creation of compelling examples is a surprisingly laborious and manual process. Our interactive, real-time system provides a way to summarize large amounts of visual data by weighted average(s) of an image collection, with the weights reflecting user-indicated importance. The aim is to capture not just the mean of the distribution, but a set of modes discovered via interactive exploration. We pose this exploration in terms of a user interactively “editing” the average image using various types of strokes, brushes and warps, similar to a normal image editor, with each user interaction providing a new constraint to update the average. New weighted averages can be spawned and edited either individually or jointly. Together, these tools allow the user to simultaneously perform two fundamental operations on visual data: user-guided clustering and user-guided alignment, within the same framework. We show that our system is useful for various computer vision and graphics applications.

Applying averaging to images, particularly in an interactive context with users, seems like a very suitable strategy.
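To make the core operation concrete, here is a minimal sketch of the weighted averaging the abstract describes: each image contributes to the average in proportion to a user-assigned weight. This is only an illustration of the arithmetic, not the paper’s system; the flat grayscale pixel arrays and the weights are made up.

```java
// Minimal sketch: weighted average of images, with user-assigned weights.
// Images are flat grayscale pixel arrays of equal length; values are made up.
public class WeightedAverageSketch {

    static double[] weightedAverage(double[][] images, double[] weights) {
        double weightSum = 0.0;
        for (double w : weights) weightSum += w;

        double[] average = new double[images[0].length];
        for (int i = 0; i < images.length; i++) {
            for (int p = 0; p < average.length; p++) {
                average[p] += weights[i] * images[i][p];
            }
        }
        for (int p = 0; p < average.length; p++) {
            average[p] /= weightSum;   // normalize so the result stays in pixel range
        }
        return average;
    }

    public static void main(String[] args) {
        double[][] images = { {0.1, 0.9, 0.5}, {0.3, 0.7, 0.5}, {0.9, 0.1, 0.5} };
        double[] weights  = { 1.0, 2.0, 0.5 };   // user-indicated importance
        System.out.println(java.util.Arrays.toString(weightedAverage(images, weights)));
    }
}
```

In the paper’s framework, each stroke, brush, or warp adds a constraint that changes these weights, so an average like this is recomputed interactively.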

What would it look like to have interactive merging of proxies based on data ranges controlled by the user?

Value-Loss Conduits?

Sun, 08/17/2014 - 20:52

Categories:

Topic Maps

Do you remove links from materials that you quote?

I ask because of the following example:

The research, led by Alexei Efros, associate professor of electrical engineering and computer sciences, will be presented today (Thursday, Aug. 14) at the International Conference and Exhibition on Computer Graphics and Interactive Techniques, or SIGGRAPH, in Vancouver, Canada.

“Visual data is among the biggest of Big Data,” said Efros, who is also a member of the UC Berkeley Visual Computing Lab. “We have this enormous collection of images on the Web, but much of it remains unseen by humans because it is so vast. People have called it the dark matter of the Internet. We wanted to figure out a way to quickly visualize this data by systematically ‘averaging’ the images.”

Which is a quote from: New tool makes a single picture worth a thousand – and more – images by Sarah Yang.

Those passages were reprinted by ScienceDaily as follows:

The research, led by Alexei Efros, associate professor of electrical engineering and computer sciences, was presented Aug. 14 at the International Conference and Exhibition on Computer Graphics and Interactive Techniques, or SIGGRAPH, in Vancouver, Canada.

“Visual data is among the biggest of Big Data,” said Efros, who is also a member of the UC Berkeley Visual Computing Lab. “We have this enormous collection of images on the Web, but much of it remains unseen by humans because it is so vast. People have called it the dark matter of the Internet. We wanted to figure out a way to quickly visualize this data by systematically ‘averaging’ the images.”

Why leave out the hyperlinks for SIGGRAPH and the Visual Computing Laboratory?

Or, for that matter, the link to the original paper: AverageExplorer: Interactive Exploration and Alignment of Visual Data Collections (ACM Transactions on Graphics, SIGGRAPH paper, August 2014), which appeared in the news release.

All three hyperlinks enhance your ability to navigate to more information. Isn’t navigation to more information a prime function of the WWW?

If so, we need to clue in ScienceDaily and other content repackagers to at least include the hyperlinks passed on to them.

If you can’t be a value-add, at least don’t be a value-loss conduit.

TCP Stealth

Sun, 08/17/2014 - 20:31

Categories:

Topic Maps

New “TCP Stealth” tool aims to help sysadmins block spies from exploiting their systems by David Meyer.

From the post:

System administrators who aren’t down with spies commandeering their servers might want to pay attention to this one: A Friday article in German security publication Heise provided technical detail on a GCHQ program called HACIENDA, which the British spy agency apparently uses to port-scan entire countries, and the authors have come up with an Internet Engineering Task Force draft for a new technique to counter this program.
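As I understand the public descriptions of the draft, the core idea is that a client that knows a shared secret derives an authentication token from that secret and the connection parameters and hides it in the TCP initial sequence number, so a server can silently drop SYN packets from scanners that lack the secret. The sketch below only illustrates that general idea; the actual draft specifies its own derivation, and every name and parameter here is an assumption.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Illustrative only: derive a 32-bit token from a shared secret and the
// connection 4-tuple, of the kind that could be hidden in a TCP initial
// sequence number. The real TCP Stealth draft defines its own scheme.
public class StealthTokenSketch {

    static int deriveToken(String sharedSecret, String srcIp, int srcPort,
                           String dstIp, int dstPort) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        String material = sharedSecret + "|" + srcIp + ":" + srcPort
                        + "|" + dstIp + ":" + dstPort;
        byte[] digest = sha256.digest(material.getBytes(StandardCharsets.UTF_8));
        // Truncate the digest to 32 bits, the size of a TCP sequence number.
        return ByteBuffer.wrap(digest).getInt();
    }

    public static void main(String[] args) throws Exception {
        int isn = deriveToken("correct horse battery staple",
                              "192.0.2.10", 40000, "198.51.100.20", 22);
        System.out.printf("candidate ISN token: 0x%08x%n", isn);
    }
}
```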

The refreshing aspect of this vulnerability is that the details are being discussed in public, as is a partial solution.

Perhaps this is a step towards transparency for cybersecurity. Keeping only malicious actors and “security researchers” in the loop hasn’t worked out so well.

Whether governments fall into “malicious actors” or “security researchers” I leave to your judgement.

Bizarre Big Data Correlations

Sun, 08/17/2014 - 20:16

Categories:

Topic Maps

Chance News 99 reported the following story:

The online lender ZestFinance Inc. found that people who fill out their loan applications using all capital letters default more often than people who use all lowercase letters, and more often still than people who use uppercase and lowercase letters correctly.

ZestFinance Chief Executive Douglas Merrill says the company looks at tens of thousands of signals when making a loan, and it doesn’t consider the capital-letter factor as significant as some other factors—such as income when linked with expenses and the local cost of living.

So while it may take capital letters into consideration when evaluating an application, it hasn’t held a loan up because of it.

Submitted by Paul Alper

If it weren’t an “online lender,” ZestFinance could take into account applications signed in crayon.

Chance News collects stories with a statistical or probability angle. Some of them can be quite amusing.

Titan 0.5 Released!

Sun, 08/17/2014 - 00:30

Categories:

Topic Maps

Titan 0.5 Released!

From the Titan documentation:

1.1. General Titan Benefits

  • Support for very large graphs. Titan graphs scale with the number of machines in the cluster.
  • Support for very many concurrent transactions and operational graph processing. Titan’s transactional capacity scales with the number of machines in the cluster and answers complex traversal queries on huge graphs in milliseconds.
  • Support for global graph analytics and batch graph processing through the Hadoop framework.
  • Support for geo, numeric range, and full text search for vertices and edges on very large graphs.
  • Native support for the popular property graph data model exposed by Blueprints.
  • Native support for the graph traversal language Gremlin.
  • Easy integration with the Rexster graph server for programming language agnostic connectivity.
  • Numerous graph-level configurations provide knobs for tuning performance.
  • Vertex-centric indices provide vertex-level querying to alleviate issues with the infamous super node problem.
  • Provides an optimized disk representation to allow for efficient use of storage and speed of access.
  • Open source under the liberal Apache 2 license.

A major milestone in the development of Titan!

If you are interested in serious graph processing, Titan is one of the systems that should be on your short list.
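To get a feel for what that looks like in code, here is a minimal sketch against the Blueprints 2.x property-graph API that Titan 0.5 implements. The configuration file path is a placeholder for your storage backend, and the calls should be checked against the Titan 0.5 documentation.

```java
import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;
import com.tinkerpop.blueprints.Vertex;

// Minimal sketch against the Blueprints API exposed by Titan 0.5.
// "conf/titan.properties" is a placeholder for your backend configuration.
public class TitanSketch {
    public static void main(String[] args) {
        TitanGraph graph = TitanFactory.open("conf/titan.properties");

        Vertex saturn = graph.addVertex(null);       // Blueprints assigns the id
        saturn.setProperty("name", "saturn");

        Vertex jupiter = graph.addVertex(null);
        jupiter.setProperty("name", "jupiter");

        graph.addEdge(null, jupiter, saturn, "father");

        graph.commit();                              // Titan transactions are explicit
        graph.shutdown();
    }
}
```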

PS: Matthias Broecheler has posted Titan 0.5.0 GA Release, which has links to upgrade instructions and comments about a future Titan 1.0 release!

our new robo-reader overlords

Fri, 08/15/2014 - 23:18

Categories:

Topic Maps

our new robo-reader overlords by Alan Jacobs.

After you read this post by Jacobs, be sure to spend time with Flunk the robo-graders by Les Perelman (quoted by Jacobs).

Both raise the question of what sort of writing can be taught by algorithms that have no understanding of writing.

In a very real sense, the outcome can only be writing that meets but does not exceed what has been programmed into an algorithm.

That is frightening enough for education, but if you are relying on AI or machine learning for intelligence analysis, your stakes may be far higher.

To be sure, software can recognize “send the atomic bomb triggers by Federal Express to this address…,” or at least I hope that is within the range of current software. But what if the message is: “The destroyer of worlds will arrive next week.” Alert? Yes/No? What if it were written in Sanskrit?

I think computers, along with AI and machine learning, can be valuable tools, but not if they are setting the standard for review, at least not if you want to avoid dumbing down writing and national security intelligence to the level of an algorithm.

I first saw this in a tweet by James Schirmer.

Applauding The Ends, Not The Means

Fri, 08/15/2014 - 21:25

Categories:

Topic Maps

Microsoft scans email for child abuse images, leads to arrest by Lisa Vaas.

From the post:

It’s not just Google.

Microsoft is also scanning for child-abuse images.

A recent tip-off from Microsoft to the National Center for Missing & Exploited Children (NCMEC) hotline led to the arrest on 31 July 2014 of a 20-year-old Pennsylvanian man in the US.

According to the affidavit of probable cause, posted on Smoking Gun, Tyler James Hoffman has been charged with receiving and sharing child-abuse images.

Shades of the days when Kodak would censor film submitted for development.

Lisa reviews the PhotoDNA techniques used by Microsoft and concludes:

The recent successes of PhotoDNA in leading both Microsoft and Google to ferret out child predators is a tribute to Microsoft’s development efforts in coming up with a good tool in the fight against child abuse.

In this particular instance, given this particular use of hash identifiers, it sounds as though those innocent of this particular type of crime have nothing to fear from automated email scanning.

No sane person supports child abuse so the outcome of the case doesn’t bother me.

However, the use of PhotoDNA isn’t limited to photos of abused children. The same technique could be applied to photos of police officers abusing protesters (wonder where you would find those?), etc.
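The matching itself is generic. PhotoDNA is proprietary, but the basic pattern behind this use of hash identifiers is simple: compute a signature for an image and compare it against a set of known signatures, flagging anything close enough. The sketch below is only an illustration of that pattern; the 64-bit signatures, the Hamming-distance comparison, and the threshold are assumptions, not PhotoDNA’s actual design.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustration only: PhotoDNA itself is proprietary. This sketch shows the
// general pattern of signature matching: compare an image's signature against
// a set of known signatures and flag anything within a small distance.
// The 64-bit signatures and the threshold are hypothetical.
public class SignatureMatchSketch {

    static boolean matchesKnownSignature(long signature, Set<Long> knownSignatures,
                                         int maxHammingDistance) {
        for (long known : knownSignatures) {
            int distance = Long.bitCount(signature ^ known);  // number of differing bits
            if (distance <= maxHammingDistance) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Set<Long> known = new HashSet<>(Arrays.asList(0x0123456789ABCDEFL, 0x0F0F0F0F0F0F0F0FL));
        System.out.println(matchesKnownSignature(0x0123456789ABCDEEL, known, 3)); // true
        System.out.println(matchesKnownSignature(0xFFFFFFFFFFFFFFFFL, known, 3)); // false
    }
}
```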

Before anyone applauds Microsoft for taking the role of censor (in the Roman sense), remember that corporate policies change. The goals of email scanning may not be so agreeable tomorrow.

XPERT (Xerte Public E-learning ReposiTory)

Fri, 08/15/2014 - 17:43

Categories:

Topic Maps

XPERT (Xerte Public E-learning ReposiTory)

From the about page:

XPERT (Xerte Public E-learning ReposiTory) project is a JISC funded rapid innovation project (summer 2009) to explore the potential of delivering and supporting a distributed repository of e-learning resources created and seamlessly published through the open source e-learning development tool called Xerte Online Toolkits. The aim of XPERT is to progress the vision of a distributed architecture of e-learning resources for sharing and re-use.

Learners and educators can use XPERT to search a growing database of open learning resources suitable for students at all levels of study in a wide range of different subjects.

Creators of learning resources can also contribute to XPERT via RSS feeds created seamlessly through local installations of Xerte Online Toolkits. Xpert has been fully integrated into Xerte Online Toolkits, an open source content authoring tool from The University of Nottingham.

Other useful links:

Xerte Project Toolkits

Xerte Community.

You may want to start with the browse option because the main interface is rather stark.

The Google interface is “stark” in the same sense, but Google has indexed a substantial portion of all online content, so I’m not very likely to draw a blank. With XPERT’s base of 364,979 resources, the odds of my drawing a blank are far higher.

The keyword list runs through three distinct alphabetical segments, one after another: a segment starts with a digit or “a,” runs to its end, and then another segment begins. Hebrew and what appears to be Chinese appear at the end of the keyword list, in no particular order. I don’t know if that is an artifact of the software or of its use.

The same repeated alphabetical segments occur under Author. Under Type there are some true types, such as “color print,” but the majority of the listing is file sizes in bytes. I am not sure why a file size would be a “type.” Institution has similar issues.

If you are looking for a volunteer opportunity, helping XPERT with alphabetization would enhance the browsing experience for the resources it has collected.
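For anyone tempted by that volunteer opportunity, the fix is conceptually simple: merge the separately alphabetized segments and re-sort the whole keyword list with a locale-aware comparator, which also gives non-Latin scripts a defined position. A minimal sketch, with made-up keywords:

```java
import java.text.Collator;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

// Minimal sketch: merge separately alphabetized segments into one list and
// re-sort it with a locale-aware Collator so mixed scripts sort consistently.
// The keyword values are made up for illustration.
public class KeywordSortSketch {
    public static void main(String[] args) {
        List<String> segment1 = Arrays.asList("algebra", "botany", "zoology");
        List<String> segment2 = Arrays.asList("anatomy", "calculus", "1066");
        List<String> segment3 = Arrays.asList("astronomy", "ביולוגיה", "化学");

        List<String> keywords = new ArrayList<>();
        keywords.addAll(segment1);
        keywords.addAll(segment2);
        keywords.addAll(segment3);

        keywords.sort(Collator.getInstance(Locale.ENGLISH));  // one consistent ordering
        keywords.forEach(System.out::println);
    }
}
```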

I first saw this in a tweet by Graham Steel.

Photoshopping The Weather

Fri, 08/15/2014 - 15:23

Categories:

Topic Maps

Photo editing algorithm changes weather, seasons automatically

From the post:

We may not be able to control the weather outside, but thanks to a new algorithm being developed by Brown University computer scientists, we can control it in photographs.

The new program enables users to change a suite of “transient attributes” of outdoor photos — the weather, time of day, season, and other features — with simple, natural language commands. To make a sunny photo rainy, for example, just input a photo and type, “more rain.” A picture taken in July can be made to look a bit more January simply by typing “more winter.” All told, the algorithm can edit photos according to 40 commonly changing outdoor attributes.

The idea behind the program is to make photo editing easy for people who might not be familiar with the ins and outs of complex photo editing software.

“It’s been a longstanding interest of mine to make image editing easier for non-experts,” said James Hays, Manning Assistant Professor of Computer Science at Brown. “Programs like Photoshop are really powerful, but you basically need to be an artist to use them. We want anybody to be able to manipulate photographs as easily as you’d manipulate text.”

A paper describing the work will be presented next week at SIGGRAPH, the world’s premier computer graphics conference. The team is continuing to refine the program, and hopes to have a consumer version of the program soon. The paper is available at http://transattr.cs.brown.edu/. Hays’s coauthors on the paper were postdoctoral researcher Pierre-Yves Laffont, and Brown graduate students Zhile Ren, Xiaofeng Tao, and Chao Qian.

For all the talk about photoshopping models, soon the Weather Channel won’t send reporters to windy, rain-soaked beaches or snow-bound roads, or even out to chase tornadoes.

With enough information, the reporters can have weather effects around them simulated and eliminate the travel cost for such assignments.

Something to keep in mind when people claim to have “photographic” evidence. That goes double for cellphone video: a cellphone only captures the context selected by its user, a non-photographic distortion that is hard to avoid.

I first saw this in a tweet by Gregory Piatetsky.

Visualizing Open-Internet Comments

Tue, 08/12/2014 - 23:54

Categories:

Topic Maps

A Fascinating Look Inside Those 1.1 Million Open-Internet Comments by Elise Hu.

From the post:

When the Federal Communications Commission asked for public comments about the issue of keeping the Internet free and open, the response was huge. So huge, in fact, that the FCC’s platform for receiving comments twice got knocked offline because of high traffic, and the deadline was extended because of technical problems.

So what’s in those nearly 1.1 million public comments? A lot of mentions of the F word, according to a TechCrunch analysis. But now, we have a fuller picture. The San Francisco data analysis firm Quid looked beyond keywords to find the sentiment and arguments in those public comments.

Quid, as commissioned by the media and innovation funder Knight Foundation, parsed hundreds of thousands of comments, tweets and news coverage on the issue since January. The firm looked at where the comments came from and what common arguments emerged from them.

Yes, NPR twice in the same day.

When NPR has or hires talent to understand the issues, it is capable of high quality reporting.

In this particular case, clustering enables the discovery of two themes that were not part of any public PR campaign, which I would take to be genuine consumer responses.

While “lite” from a technical standpoint, the post does a good job of illustrating the value of this type of analysis.

PS: While omitted from the NPR story, here is a link to Quid.

TinkerPop3 3.0.0.M1

Tue, 08/12/2014 - 23:39

Categories:

Topic Maps

TinkerPop3 3.0.0.M1 Released — A Gremlin Raga in 7/16 Time by Marko A. Rodriguez.

From the post:

TinkerPop3 3.0.0.M1 “A Gremlin Rāga in 7/16 Time” is now released and ready for use.

http://tinkerpop.com (downloads and docs)
https://github.com/tinkerpop/tinkerpop3/blob/master/CHANGELOG.asciidoc (changelog)

IMPORTANT: TinkerPop3 requires Java8.
http://www.oracle.com/technetwork/java/javase/overview/java8-2100321.html

We would like both developers and vendors to play with this release and provide feedback as we move forward towards M2, …, then GA.

  1. Is the API how you like it?
  2. Is it easy to implement the interfaces for your graph engine?
  3. Is the documentation clear?
  4. Are there VertexProgram algorithms that you would like to have?
  5. Are there Gremlin steps that you would like to have?
  6. etc…

For the above, as well as for bugs, the issue tracker is open and ready for submissions:
https://github.com/tinkerpop/tinkerpop3/issues

TinkerPop3 is the culmination of a huge effort from numerous individuals. You can see the developers and vendors that have provided their support through the years.
http://www.tinkerpop.com/docs/current/#tinkerpop-contributors
(the documentation may take time to load due to all the graphics in the single HTML)

If you haven’t looked at the TinkerPop3 docs in a while, take a quick look. Recent tweets have pointed out how nice the documentation for several sections is.
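If you want something concrete to poke at while giving feedback on the milestone, a minimal Gremlin-Java session against the TinkerGraph reference implementation looks roughly like the sketch below. The API moved around between the 3.0 milestones (and the packages later moved to Apache), so treat the exact class and method names as assumptions to be checked against the M1 documentation.

```java
import com.tinkerpop.gremlin.structure.Graph;
import com.tinkerpop.gremlin.structure.Vertex;
import com.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;

// Rough sketch against TinkerPop3 3.0.0.M1 (requires Java 8). Package and
// method names shifted between milestones, so verify against the docs.
public class TinkerPop3Sketch {
    public static void main(String[] args) {
        Graph graph = TinkerGraph.open();                  // in-memory reference graph

        Vertex gremlin = graph.addVertex("name", "gremlin");
        Vertex blueprints = graph.addVertex("name", "blueprints");
        gremlin.addEdge("traverses", blueprints);

        // Gremlin traversal: what does gremlin traverse?
        System.out.println(
            graph.V().has("name", "gremlin").out("traverses").values("name").toList());
    }
}
```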

Functional Examples from Category Theory

Tue, 08/12/2014 - 23:24

Categories:

Topic Maps

Functional Examples from Category Theory by Alissa Pajer.

Summary:

Alissa Pajer discusses through examples how to understand and write cleaner and more maintainable functional code using category theory.

You will need to either view the video at full screen or download the slides to see the code.

Long on category theory but short on Scala. Still, a useful video that will be worth re-watching.
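For a flavor of the kind of connection the talk draws, here is a tiny illustration of the functor composition law, rendered in Java with Optional as the functor: mapping with f and then g gives the same result as mapping once with g composed with f. The talk itself uses Scala; this Java rendering is mine.

```java
import java.util.Optional;
import java.util.function.Function;

// Tiny illustration of the functor composition law, using Optional as the functor:
// mapping with f then g equals mapping once with (g compose f).
public class FunctorLawSketch {
    public static void main(String[] args) {
        Function<Integer, Integer> f = x -> x + 1;
        Function<Integer, String> g = x -> "value:" + x;

        Optional<Integer> box = Optional.of(41);

        String stepwise = box.map(f).map(g).get();
        String composed = box.map(g.compose(f)).get();

        System.out.println(stepwise);                       // value:42
        System.out.println(composed);                       // value:42
        System.out.println(stepwise.equals(composed));      // true: the law holds here
    }
}
```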

The dynamics of correlated novelties

Tue, 08/12/2014 - 21:04

Categories:

Topic Maps

The dynamics of correlated novelties by F. Tria, V. Loreto, V. D. P. Servedio, and S. H. Strogatz.

Abstract:

Novelties are a familiar part of daily life. They are also fundamental to the evolution of biological systems, human society, and technology. By opening new possibilities, one novelty can pave the way for others in a process that Kauffman has called “expanding the adjacent possible”. The dynamics of correlated novelties, however, have yet to be quantified empirically or modeled mathematically. Here we propose a simple mathematical model that mimics the process of exploring a physical, biological, or conceptual space that enlarges whenever a novelty occurs. The model, a generalization of Polya’s urn, predicts statistical laws for the rate at which novelties happen (Heaps’ law) and for the probability distribution on the space explored (Zipf’s law), as well as signatures of the process by which one novelty sets the stage for another. We test these predictions on four data sets of human activity: the edit events of Wikipedia pages, the emergence of tags in annotation systems, the sequence of words in texts, and listening to new songs in online music catalogues. By quantifying the dynamics of correlated novelties, our results provide a starting point for a deeper understanding of the adjacent possible and its role in biological, cultural, and technological evolution.

From the introduction:

The notion that one new thing sometimes triggers another is, of course, commonsensical. But it has never been documented quantitatively, to the best of our knowledge. In the world before the Internet, our encounters with mundane novelties, and the possible correlations between them, rarely left a trace. Now, however, with the availability of extensive longitudinal records of human activity online, it has become possible to test whether everyday novelties crop up by chance alone, or whether one truly does pave the way for another.
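The model described in the abstract, a Polya urn in which drawing a color for the first time enlarges the space of possibilities, is easy to simulate, and a simulation gives a feel for the Heaps’-law growth it predicts. The sketch below is my own reading of that description, not the authors’ code, and the parameter names (reinforcement rho, triggering nu) are assumptions.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Sketch of an urn model with triggering, in the spirit of the model the
// abstract describes (not the authors' code; RHO and NU are assumed names):
// draw a color, put it back with RHO extra copies, and the first time a color
// is drawn add NU brand-new colors to the urn ("expanding the adjacent possible").
public class UrnWithTriggering {
    static final int RHO = 4;        // reinforcement: extra copies of the drawn color
    static final int NU  = 3;        // triggering: new colors added on a first draw
    static final int STEPS = 100_000;

    public static void main(String[] args) {
        Random random = new Random(42);
        List<Integer> urn = new ArrayList<>();
        int nextColor = 0;
        for (int i = 0; i < NU + 1; i++) urn.add(nextColor++);     // initial colors

        Set<Integer> seen = new HashSet<>();
        for (int t = 1; t <= STEPS; t++) {
            int drawn = urn.get(random.nextInt(urn.size()));
            for (int i = 0; i < RHO; i++) urn.add(drawn);          // reinforcement
            if (seen.add(drawn)) {                                 // a novelty
                for (int i = 0; i < NU; i++) urn.add(nextColor++); // adjacent possible
            }
            if (t % 10_000 == 0) {
                // Distinct colors seen vs. draws: Heaps'-law-like sublinear growth.
                System.out.println(t + "\t" + seen.size());
            }
        }
    }
}
```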

Steve Newcomb often talks about serendipity and topic maps. What if it is possible to engineer serendipity? That is, over a large enough population, could we discover the subjects that are going to trigger the transition where the “formerly adjacent possible becomes actualized”?

This work is in its very early stages but its impact on information delivery/discovery may be substantial.

NPR + CIA = Credible Disinformation

Tue, 08/12/2014 - 20:46

Categories:

Topic Maps

NPR Is Laundering CIA Talking Points to Make You Scared of NSA Reporting by Glenn Greenwald and Andrew Fishman.

From the post:

On August 1, NPR’s Morning Edition broadcast a story by NPR national security reporter Dina Temple-Raston touting explosive claims from what she called “a tech firm based in Cambridge, Massachusetts.” That firm, Recorded Future, worked together with “a cyber expert, Mario Vuksan, the CEO of ReversingLabs,” to produce a new report that purported to vindicate the repeated accusation from U.S. officials that “revelations from former NSA contract worker Edward Snowden harmed national security and allowed terrorists to develop their own countermeasures.”

The “big data firm,” reported NPR, says that it now “has tangible evidence” proving the government’s accusations. Temple-Raston’s four-minute, 12-second story devoted the first 3 minutes and 20 seconds to uncritically repeating the report’s key conclusion that “just months after the Snowden documents were released, al-Qaeda dramatically changed the way its operatives interacted online” and, post-Snowden, “al-Qaeda didn’t just tinker at the edges of its seven-year-old encryption software; it overhauled it.” The only skepticism in the NPR report was relegated to 44 seconds at the end when she quoted security expert Bruce Schneier, who questioned the causal relationship between the Snowden disclosures and the new terrorist encryption programs, as well as the efficacy of the new encryption.

The day after that NPR report, I posted Hire Al-Qaeda Programmers, which pointed out the technical absurdity of the claims made in the NPR story: that three different organizations re-wrote security software within three to five months following the Snowden leaks, contrary to all experience with software projects.

Greenwald follows the money to reveal that Recorded Future and ReversingLabs are both deeply in the pockets of the CIA, and he exposes other issues and problems with both the Recorded Future “report” and the NPR story on it.

We can debate why Dina Temple-Raston didn’t do a fuller investigation, express more skepticism, or ask sharper questions.

But the question that interests me is this one: Why report the story at all?

Just because Recorded Future, the CIA, or even the White House releases claims about Edward Snowden and national security isn’t a reason to repeat them. Even if they are repeated with critical analysis or following the money trail as did Greenwald.

Even superficial investigation would have revealed the only “tangible evidence” in the possession of Recorded Future is the paper on which it printed its own speculations. That should have been the end of the story.

If the story had already been broken by other outlets, then the NPR story should have been “XYZ taken in by a false story….”

Instead, we have NPR lending its credibility to a government and agencies who have virtually none at all. We are served “credible” disinformation because of its source, NPR.

The average listener isn’t going to remember the companies involved or most of the substance of the story. What they are going to remember is that they heard NPR report that Snowden’s leaks harmed national security.

Makes me wonder what other inadequately investigated stories NPR is broadcasting.

You?

PS: You could say that Temple-Raston just “forgot” or overlooked the connections Greenwald reports. Or another reporter, confronted with a similar lie, may not know of the connections. How would you avoid a similar outcome in the future?

InChI identifier

Mon, 08/11/2014 - 21:17

Categories:

Topic Maps

How the InChI identifier is used to underpin our online chemistry databases at Royal Society of Chemistry by Antony Williams.

Description:

The Royal Society of Chemistry hosts a growing collection of online chemistry content. For much of our work the InChI identifier is an important component underpinning our projects. This enables the integration of chemical compounds with our archive of scientific publications, the delivery of a reaction database containing millions of reactions as well as a chemical validation and standardization platform developed to help improve the quality of structural representations on the internet. The InChI has been a fundamental part of each of our projects and has been pivotal in our support of international projects such as the Open PHACTS semantic web project integrating chemistry and biology data and the PharmaSea project focused on identifying novel chemical components from the ocean with the intention of identifying new antibiotics. This presentation will provide an overview of the importance of InChI in the development of many of our eScience platforms and how we have used it to provide integration across hundreds of websites and chemistry databases across the web. We will discuss how we are now expanding our efforts to develop a platform encompassing efforts in Open Source Drug Discovery and the support of data management for neglected diseases.

Although I have seen more than one of Antony’s slide decks, there is information here that bears repeating, as well as some news.

InChI identifiers are chemical identifiers based on the chemical structure of a substance. They are not designed to replace current identifiers but rather to act as lynchpins that enable other names to be mapped together against a known chemical structure. (The IUPAC International Chemical Identifier (InChI))
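That lynchpin role is easy to picture in code: keep a map from an InChI string to the set of other names asserted for the same structure, so the structure-based identifier ties the names together. A trivial sketch follows; the water InChI in it is quoted from memory and should be verified against an InChI resolver.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Trivial sketch of the "lynchpin" idea: other names are mapped together
// against a structure-based InChI. The water InChI below is quoted from
// memory; check it against an InChI resolver before relying on it.
public class InChILynchpinSketch {
    public static void main(String[] args) {
        Map<String, Set<String>> namesByInChI = new HashMap<>();

        String waterInChI = "InChI=1S/H2O/h1H2";
        namesByInChI.computeIfAbsent(waterInChI, k -> new HashSet<>()).add("water");
        namesByInChI.computeIfAbsent(waterInChI, k -> new HashSet<>()).add("oxidane");
        namesByInChI.computeIfAbsent(waterInChI, k -> new HashSet<>()).add("H2O");

        // All names asserted for the same structure are now retrievable together.
        System.out.println(namesByInChI.get(waterInChI));
    }
}
```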

Antony says at slide #31 that all 21st-century articles (100K) have been processed, and he is not shy about pointing out known problems in existing data.

I regret not seeing the presentation but the slides left me with a distinctly positive feeling about progress in this area.

Getting Good Tip

Mon, 08/11/2014 - 20:49

Categories:

Topic Maps

I first saw:

“if you want to get good at R (or anything really) the trick is to find a reason to use it every day”

in a tweet by Neil Saunders, quoting Tony Ojeda in How to Transition from Excel to R.

That sounds more doable than saying, “I will practice R for an hour every day this week.” Some days you will and some days you won’t. But if you find a reason to use R (or anything else) once a day, I suspect it will creep into your regular routine.

Enjoy!

Multiobjective Search

Mon, 08/11/2014 - 20:29

Categories:

Topic Maps

Multiobjective Search with Hipster and TinkerPop Blueprints

From the webpage:

This advanced example explains how to perform a general multiobjective search with Hipster over a property graph using the TinkerPop Blueprints API. In a multiobjective problem, instead of optimizing just a single objective function, there are many objective functions that can conflict each other. The goal then is to find all possible solutions that are nondominated, i.e., there is no other feasible solution better than the current one in some objective function without worsening some of the other objective functions.
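The “nondominated” condition in that description is worth seeing in code. One candidate dominates another if it is at least as good on every objective and strictly better on at least one; the nondominated set keeps everything no other candidate dominates. The sketch below is a generic version of that test (assuming all objectives are minimized), not Hipster’s own API.

```java
// Generic sketch of Pareto dominance (assuming all objectives are minimized);
// this illustrates the "nondominated" condition, not Hipster's own API.
public class DominanceSketch {

    /** Returns true if candidate a dominates candidate b. */
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetterSomewhere = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;            // worse on some objective
            if (a[i] < b[i]) strictlyBetterSomewhere = true;
        }
        return strictlyBetterSomewhere;
    }

    public static void main(String[] args) {
        double[] route1 = {10.0, 3.0};   // e.g. {distance, tolls}
        double[] route2 = {12.0, 3.0};
        double[] route3 = { 8.0, 5.0};

        System.out.println(dominates(route1, route2)); // true: better or equal everywhere
        System.out.println(dominates(route1, route3)); // false: route3 is shorter
        System.out.println(dominates(route3, route1)); // false: route1 has lower tolls
    }
}
```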

If you don’t know Hipster:

The aim of Hipster is to provide an easy to use yet powerful and flexible type-safe Java library for heuristic search. Hipster relies on a flexible model with generic operators that allow you to reuse and change the behavior of the algorithms very easily. Algorithms are also implemented in an iterative way, avoiding recursion. This has many benefits: full control over the search, access to the internals at runtime or a better and clear scale-out for large search spaces using the heap memory.

You can use Hipster to solve from simple graph search problems to more advanced state-space search problems where the state space is complex and weights are not just double values but custom defined costs.

I can’t help but hear “multiobjective search” in the context of a document search where documents may or may not match multiple terms in a search request.

But that hearing is wrong, because a graph can be more granular than a document and possess multiple ways to satisfy a particular objective. My intuition is that documents satisfy search requests only in a binary sense, yes or no. Yes?

A good way to get involved with TinkerPop Blueprints.

How to Transition from Excel to R

Mon, 08/11/2014 - 19:30

Categories:

Topic Maps

How to Transition from Excel to R: An Intro to R for Microsoft Excel Users by Tony Ojeda.

From the post:

In today’s increasingly data-driven world, business people are constantly talking about how they want more powerful and flexible analytical tools, but are usually intimidated by the programming knowledge these tools require and the learning curve they must overcome just to be able to reproduce what they already know how to do in the programs they’ve become accustomed to using. For most business people, the go-to tool for doing anything analytical is Microsoft Excel.

If you’re an Excel user and you’re scared of diving into R, you’re in luck. I’m here to slay those fears! With this post, I’ll provide you with the resources and examples you need to get up to speed doing some of the basic things you’re used to doing in Excel in R. I’m going to spare you the countless hours I spent researching how to do this stuff when I first started so that you feel comfortable enough to continue using R and learning about its more sophisticated capabilities.

Excited? Let’s jump in!

Not a complete transition but enough to give you a taste of R that will leave you wanting more.

You will likely find R is better for some tasks and that you prefer Excel for others. Why not have both in your toolkit?

Patent Fraud, As In Patent Office Fraud

Mon, 08/11/2014 - 19:09

Categories:

Topic Maps

Patent Office staff engaged in fraud and rushed exams, report says by Jeff John Roberts.

From the post:

…One version of the report also flags a culture of “end-loading” in which examiners “can go from unacceptable performance to award levels in one bi-week by doing 500% to more than 1000% of their production goal.”…

See Jeff’s post for other details and resources.

Assuming the records for patent examiners can be pried loose from the Patent Office, this would make a great topic map project. Associate the 500% periods with specific patents and further litigation on those patents, to create a resource for further attacks on patents approved by a particular examiner.

By the time a gravy train like patent examining makes the news, you know the train has already left the station.

On the up side, perhaps Congress will re-establish the Patent Office and prohibit any prior staff, contractors, etc. from working at the new Patent Office. The new Patent Office could adopt rules designed to enable innovation while also tracking prior innovation effectively. Present Patent Office practices have little to do with either of those goals.