Solving the NPM Problem at Scale

If you haven’t heard, Azer Koculu unpublished a bunch of his modules in protest against behavior by the company that backs NPM. This broke the NPM ecosystem, with hundreds of popular projects suddenly unable to build. Now there’s lots of talk about what to do. PGP signatures? Always pinning? Permacaching with IPFS? I think Azer’s goal was achieved. We are now actually talking about how brittle the system is. The conversation is happening. This is good.

However, I fear the solutions being proposed are just band-aids. We are stuck between a rock and a hard place (explanation for my non-English speaking friends). We take on huge risks whenever we use third-party dependencies. We can reduce the risk by more carefully monitoring our imports to detect malicious behavior, but this only scales so far. As our software gets more complex we have more dependencies. Not using dependencies simply isn’t an option, because then we couldn’t build software as big as we need. Mandating signatures or permacaching with pinning helps, but only to a point. Those measures don’t “solve” the problem.

I think we need to look at this from a different perspective. Rather than lots of micro-fixes that address parts of the problem, we must take a holistic view. Today’s system for third-party dependencies doesn’t work because it relies on human-scale efforts: the reputation of the maintainer, manual star rankings, etc. We can patch the current system, and we absolutely should, but in the future we need something that works at machine scale, not human scale. Let us consider two other problems that were solved only by changing the nature of the solution: web search and email spam.

Machine Scale

Both web search and spam are harder problems than dependency management. They involve not only sifting the wheat from the chaff, but also fighting actively malicious actors who try to game the system. Imagine if not only could an NPM module break at any time, but every hundredth module was trying to plant an exploit in your program. The current system would break very quickly.

In the old days we tried to fix spam and search with human-scale solutions: custom rules to flag an email as spam, manually organizing pages into directories. We all know how this turned out. It didn’t scale. Google came along with a search engine that ranks pages based on who links to them organically, creating a rating system for the entire web. Spam is detected through automated Bayesian filters, not manual tagging. Both solutions changed the nature of the game.
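To make the Bayesian-filter idea concrete, here is a minimal sketch of the classic approach: combine per-word spam probabilities into one score. The word probabilities below are made up for illustration; a real filter learns them from a large corpus of labeled mail.

```javascript
// Toy Bayesian spam scoring. The per-word probabilities P(spam | word)
// are hypothetical; a real filter estimates them from training data.
const pSpamGivenWord = { viagra: 0.99, winner: 0.9, meeting: 0.1, report: 0.05 };

// Combine the evidence from each known word under the naive
// independence assumption: p = Π(s) / (Π(s) + Π(1 - s)).
function spamScore(words) {
  let s = 1, h = 1;
  for (const w of words) {
    const p = pSpamGivenWord[w];
    if (p === undefined) continue; // unknown words contribute no signal here
    s *= p;
    h *= 1 - p;
  }
  return s / (s + h);
}

console.log(spamScore(["viagra", "winner"])); // ≈ 0.999 — very likely spam
console.log(spamScore(["meeting", "report"])); // ≈ 0.006 — very likely ham
```

No human writes a rule per message; the scoring falls out of the statistics, which is exactly what makes it work at machine scale.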

Toward an Automated Solution

So how could we apply this to dependency management? We need a different sort of tool, one that tackles the problem as a whole, not just piecemeal. This tool would map the full dependency graph of your project and provide you with a risk rating. The rating would be based on analysis of the dependency graphs of thousands of open source projects, including their history. We can teach the system what good code looks like. Once the graph is analyzed we could do lots of things:

Run automated code analysis for potentially malicious behavior. As we all know, it’s impossible to fully analyze Turing-complete code (this is the halting problem), but we could make a good effort that would at least catch some things. Along the way we could catch many ordinary bugs as well.

Rank libraries based on how much they are used, not how much they are starred.

If the graph of your project is very deep or has too many cross-cutting dependencies, that’s a sign you need to refactor. Automated tools that surface this information let us make refactoring decisions based on more than a gut feeling.

Provide alternatives. If package X looks fishy because few people use it, the system could suggest a replacement based on what others are using in their projects. Scanning the git history of other projects would prove enlightening: did someone else use lib X and later replace it with lib Y?

Scan the history of a project looking for unusual changes. Was project X fine until last week, when there was a sudden big change? That might be worth a closer look.

Embed this tool in your IDE to give you options when you first import a library. Call by Meaning (pdf) shows that this may actually be feasible.
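The heart of all of these ideas is walking the dependency graph. Here is a minimal sketch of that step: given a registry mapping each package to its direct dependencies, compute the transitive dependency count and maximum depth for a project. The in-memory `registry` object is a stand-in for illustration; a real tool would read `package.json` files or query the NPM registry.

```javascript
// Hypothetical registry: package name -> list of direct dependencies.
// A real tool would build this from package.json files or the NPM API.
const registry = {
  app: ["left-pad", "request"],
  request: ["qs", "left-pad"],
  qs: [],
  "left-pad": [],
};

// Walk the graph from a root package, recording every transitive
// dependency once and the maximum depth reached along any path.
function analyze(registry, root) {
  const seen = new Set();
  let maxDepth = 0;
  function walk(pkg, depth) {
    maxDepth = Math.max(maxDepth, depth);
    for (const dep of registry[pkg] || []) {
      if (!seen.has(dep)) {
        seen.add(dep);
        walk(dep, depth + 1);
      }
    }
  }
  walk(root, 0);
  return { count: seen.size, maxDepth };
}

console.log(analyze(registry, "app")); // { count: 3, maxDepth: 2 }
```

A risk rater would hang its signals off this same traversal: per-package usage counts across thousands of projects, unusual commit activity, and depth and fan-out metrics that flag a graph as refactoring-worthy.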

We Must Do Better

I’m glad we are talking about the flaws in NPM. I love NPM. But its greatest feature, that it makes creating modules easy, is also its biggest curse: we now make more modules than ever. This forces us to really look at the underlying problem and tackle it wholesale, not piecemeal. My solution is but one possible answer. I’m sure there are other ways we could solve the problem. The key is to look beyond human-scale solutions to ways that harness the power of this crazy cloud beast we’ve created called The Internet.

Talk to me about it on Twitter

Posted March 24th, 2016

Tagged: npm opensource programming