Hacking Patents

Last week at the Media Lab, we had the pleasure of hosting USPTO Director Dave Kappos and several members of the PTO's senior staff, to brainstorm ways that we might use technology in creative ways to help the patent office work better, and to help the patent system work better in general.  The meeting was organized by Beth Noveck, who is just beginning her stint as a visiting professor at the Lab -- among other things, Beth and her former student Chris Wong are the ones behind the Peer to Patent project, which crowdsources aspects of the patent review process.  The goal of the day was to come away from the meeting with a handful of concrete projects that we (at the Media Lab, and at other institutions) could start to hack on.

As most people know, there are a lot of issues with the patent system, especially with software patents.  Too many meaningless software patents get issued, and too much needless litigation ensues.  It's a drag on the industry, and it's particularly tough on new companies.  Just this week, EFF launched a campaign to address exactly these issues.

While EFF is pursuing legislative means to address the problem, the focus of this workshop was to look at what could be done in the current context, using tech as a lever.  There are a whole lot of opportunities to bring more and better information into the process, to reduce the burden on patent examiners, to increase the quality of approved patents, and to make better use of the patent corpus as a teaching & learning tool.  And in general, I'm a fan of finding hacks to hard problems (like Twitter's recent "patent hack").

First: in prepping for the meeting, I reached out to Mike Masnick of Techdirt for his suggestions on what could be built.  I'll just paste his reply directly here, since there are good ideas in there:

  • Much better search tools (the USPTO website's search is bad, and Google's patent search is just so-so).

  • A "prior art" tool, which basically takes the patent database, and lets people annotate it with examples of prior art

  • A system that indicates how often a patent has been used in litigation (i.e., a warning system for patents that have often been used for trolling).

  • A "dashboard" of sorts that details claims that are rejected on re-exam (something like 80% of patents that get re-examined have claims rejected -- which is scary when you think about it).  Such a dashboard should highlight patents that were changed, how frequently such re-exams lead to changes, and perhaps even "scores" patent examiners for how often claims they approved get rejected on re-exam.

  • A better and more public version of the PAIR system, including one that shows just how many "final rejections" a patent received before it was approved (you'd be amazed).

  • An "expert" database of techies willing to testify during patent trials on the obviousness of a patent or on prior art -- making it easier for defendants to bolster their case with evidence of invalidity of patents.

  • I don't know if there's any "technology" here, but something that tries to unwind the shell companies that make up most patent trolls, to find out who's really behind them (seriously, a bunch of them are so hidden you have no clue who actually owns the patent).
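
As a side note on Mike's "dashboard" idea: most of these are plain aggregation problems once the underlying data is liberated.  Here's a minimal sketch of the examiner-scoring piece, assuming a flat file of re-exam outcomes exists -- the file and its column names are hypothetical, since the PTO doesn't publish data in this exact shape:

```python
# Hypothetical sketch: score examiners by how often claims they approved
# were later rejected on re-exam.  The CSV layout (patent_id, examiner_id,
# claims_rejected) is invented for illustration.
import csv
from collections import defaultdict

def examiner_reexam_rates(path):
    """Return {examiner_id: fraction of that examiner's re-examined
    patents that had at least one claim rejected}."""
    totals = defaultdict(int)    # re-exams seen per examiner
    rejected = defaultdict(int)  # re-exams with >= 1 claim rejected
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["examiner_id"]] += 1
            if int(row["claims_rejected"]) > 0:
                rejected[row["examiner_id"]] += 1
    return {e: rejected[e] / totals[e] for e in totals}

# Show the ten examiners whose approved claims fare worst on re-exam.
rates = examiner_reexam_rates("reexams.csv")  # hypothetical input file
for examiner, rate in sorted(rates.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{examiner}: {rate:.0%} of re-exams led to rejected claims")
```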

Coming into the meeting, the PTO identified the following three areas of interest:

  • Quality of Information: how can applicants and examiners access more and better data sources, and how can data be more interlinked throughout the application lifecycle?

  • Better analysis of applications: cross-referencing, crowd-sourcing, automated analysis.

  • Customer interaction: the patent application process involves a tremendous amount of back-and-forth.

Without going through the whole list of ideas from the workshop, here are the ideas/topics/opportunities that felt most promising to me:

  • Reframing patents as building blocks -- a primary purpose of patents is to bring understanding of inventions into the public realm.  However, in practice this doesn't work too well.  Making the patent database more linkable and searchable would be a first step.  Requiring diagrams to be submitted in an open digital format, and/or requiring photos to be taken from pre-defined angles, would help.  Another idea was to map products in the marketplace back to the underlying patents, as sort of a "view source" for products -- I love the idea of that.  In general, there was a consensus that there's a need and an opportunity to reframe the patent corpus as a basis for innovation, not just as a means of protecting individual rights.

  • Improving the pre-submission process -- a "turbotax for patents", i.e., a web app (or an ecosystem of apps built on top of a PTO API) that would walk filers through the process, plus machine-learning tools that could analyze applications against the existing patent database (a rough sketch of that kind of similarity check appears after this list).

  • Incentives -- we spent a fair amount of time talking about incentives -- in particular, how to encourage participation in finding prior art (perhaps by recruiting filers to find prior art on other patents, offering, say, speedier review of their own application as a reward), and also how to encourage filers to leverage peer review more effectively (say, by offering speedier review if they agree to do so).  There were also ideas like building a reputation system that more clearly identifies "awesome" patents (those that have been cited many times), lame ones (those that have been repeatedly and successfully challenged), and "rogues" -- trolls and others who are abusing the system (a toy version of such a scoring scheme also appears after this list).

  • Peer review and engaging existing communities of expertise.  There was talk about how the PTO might engage with communities like the ones around StackExchange, which are already bringing tremendous community resources to bear on answering technical questions.  A sticky point in the application process is *when* peer review can happen (currently only after the point of public disclosure -- 18 months after the initial application) -- so one challenge is thinking about systems that might open up opportunities for more peer review and/or crowdsourced assistance of some kind earlier in the process.  Also, Sandy Pentland from the Media Lab stressed that we should avoid thinking about how to "bring more experts into the process" and instead focus on bringing more *information* to the process -- and implement peer review and network-oriented systems to assess quality.
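
On the "turbotax for patents" idea above, the machine-learning piece doesn't need to be exotic to be useful.  Here's a minimal sketch using plain TF-IDF cosine similarity (via scikit-learn) to surface the existing patents closest to a draft application -- the corpus format, and the use of abstracts rather than full claims, are simplifying assumptions on my part:

```python
# Minimal sketch: surface existing patents similar to a draft application
# using TF-IDF cosine similarity.  A real tool would index full claims
# and use something stronger than bag-of-words, but the shape is the same.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def most_similar_patents(draft_text, corpus, top_n=5):
    """corpus: list of (patent_id, abstract_text) tuples."""
    ids, texts = zip(*corpus)
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(texts)        # one row per patent
    draft_vec = vectorizer.transform([draft_text])  # project the draft
    scores = cosine_similarity(draft_vec, matrix)[0]
    return sorted(zip(ids, scores), key=lambda p: -p[1])[:top_n]
```

A draft whose nearest neighbors score very high is a cheap, early signal -- to the filer first, and to the examiner later -- that the claims may overlap existing art.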
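
And on the reputation system: at its simplest, it's just a transparent labeling rule over public events.  A toy sketch, with thresholds invented purely to make the shape concrete:

```python
# Toy sketch of the "awesome"/"lame" patent reputation idea.  The
# thresholds are invented; a real system would tune them against
# citation and litigation data.
def label_patent(citation_count, successful_challenges):
    if successful_challenges >= 2:
        return "lame"     # repeatedly and successfully challenged
    if citation_count >= 100:
        return "awesome"  # widely cited by later inventions
    return "unranked"
```

The "rogue" label is harder, since it attaches to owners rather than patents -- which loops back to Mike's point about unwinding shell companies.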

Here's what ended up on the whiteboard (which of course misses most of the color from the conversation but gets some of the ideas across):

I'll post an update once more concrete plans emerge and things start to happen.  The plan is for several groups at the Lab to peel off and start working on pieces of this.  In the meantime, if you have any suggestions or thoughts, post them in the comments here, or tweet at Beth Noveck, Chris Wong, Margot Kaminski, Jason Schultz, Joi Ito, or me.
