Josh On DesignArt, Design, and Usability for Software EngineersTue Dec 16 2014 21:52:36 GMT+0000 (UTC)\nGetting Started with NodeJS and ThrustI’ve used a lot of cross platform desktop toolkits over the years. I’ve even built some when I worked on Swing and JavaFX at Sun, and I continue development of Amino, an OpenGL toolkit for JavaScript. I know far more about toolkits than I wish. You would think the hardest part of making a good toolkit is the graphics or the widgets or the API. Nope. It’s deployment. Client side Java failed because of deployment. How to actually get the code running on the end user’s computer 100% reliably, no matter what system they have. Deployment on desktop is hard. Perhaps some web technology can help. <p> </p> <h3 id="id40168">Thrust</h3> <p> Today we’re going to play with a new toolkit called <a href='https://github.com/breach/thrust'>Thrust</a>. Thrust is an embeddable web view based on Chromium, similar to Atom-Shell or Node-webkit, but with one big difference. The Chromium renderer runs in a separate process that your app communicates with over a simple JSON based RPC pipe. This one architectural decision makes Thrust far more flexible and reliable. </p> <p> Since the actual API is over a local connection instead of C bindings, you can use Thrust with the language of your choosing. Bindings already exist for NodeJS, Go, Scala, and Python. The bindings just speak the RPC protocol so you could roll your own bindings if you want. This split makes Thrust far more reliable and flexible than previous Webkit embedding efforts. Though it’s still buggy, I’ve already had far more success with Thrust than I did with Atom-Shell. </p> <p> For this tutorial I’m going to show you how to use Thrust with NodeJS, but the same principles apply to other language bindings. This tutorial assumes you already have node installed and a text editor available. </p> <p> </p> <p> </p> <h3 id="id39123">A simple app</h3> <p> First create a new directory and node project. </p> <pre><code>mkdir thrust_tutorial cd thrust_tutorial npm init</code></pre> Accept all of the defaults for <code>npm init</code>. <p> Now create a minimal <code>start.html</code> page that looks like this. </p> <pre><code>&lt;html> &lt;body> &lt;h1>Greetings, Earthling&lt;/h1> &lt;/body> &lt;/html></code></pre> <p> Create another file called <code>start.js</code> and paste this in it: </p> <pre><code>var thrust = require('node-thrust'); var path = require('path'); thrust(function(err, api) { var url = 'file://'+path.resolve(__dirname, 'start.html'); var window = api.window({ root_url: url, size: { width: 640, height: 480, } }); window.show(); window.focus(); });</code></pre> <p> This launches Thrust with the <code>start.html</code> file in it. Notice that you have to use an absolute URL with a <code>file:</code> protocol because Thrust acts like a regular web browser. It needs real URLs not just file paths. </p> <p> </p> <h3 id="id92972">Installing Thrust</h3> <p> Now install Thrust and save it to the package.json file. The node bindings will fetch the correct binary parts for your platform automatically. </p> <pre><code>npm install --save node-thrust</code></pre> <p> </p> <p> Now run it! </p> <pre><code>node start.js</code></pre> <p> You should see something like this: </p> <p> <img src='http://joshondesign.com/images/97086_ThrustShellScreenSnapz001.png' alt='text'/> </p> <p> A real local app in just a few lines. Not bad. If your app has no need for native access (loading files, etc.) then you can stop right now. 
You have a local page up and running. It can load JavaScript from remote servers, though I’d copy them locally for offline usage. </p> <p> However, you probably want to do something more than . The advantage of Node is the amazing ecosystem of modules. My personal use case is an Arduino IDE. I want the Node side for compiling code and using the serial port. The web side is for editing code and debugging. That means the webpage side of my app needs to talk to the node side. </p> <p> </p> <h3 id="id46910">Message Passing</h3> <p> Thrust defines a simple message passing protocol between the two halves of the application. This is mostly hidden by the language binding. The node function ‘window.focus()” actually becomes a message sent from the Node side to the Chromium side over an internal pipe. We don’t need to care about how it works, but we do need to pass messages back and forth. </p> <p> On the browser side add this code to <code>start.html</code> to send a message using the <code>THRUST.remote</code> object like this: </p> <pre><code>&lt;script type='text/javascript'> THRUST.remote.listen(function(msg) { console.log("got back a message " + JSON.stringify(msg)); }); THRUST.remote.send({message:"I am going to solve all the world's energy problems."}); &lt;/script></code></pre> <p> Then receive the message and respond on the Node side with this code: </p> <pre><code> window.on('remote', function(evt) { console.log("got a message " + JSON.stringify(evt)); window.remote({message:"By blowing it up?"}); });</code></pre> <p> The messages may be any Javascript object that can be serialized as JSON, so you can't pass functions back and forth, just data. </p> <p> If you run this code you'll see a bunch of debugging information on the command line including the <code>console.log</code> output. </p> <pre><code>[55723:1216/141201:INFO:remote_bindings.cc(96)] REMOTE_BINDINGS: SendMessage got a message {"message":{"message":"I am going to solve all the world's energy problems."}} [55721:1216/141202:INFO:api.cc(92)] [API] CALL: 1 remote [55721:1216/141202:INFO:thrust_window_binding.cc(94)] ThrustWindow call [remote] [55721:1216/141202:INFO:CONSOLE(7)] "got back a message {"message":"By blowing it up?"}", source: file:///Users/josh/projects/thrust_tutorial/start.html (7)</code></pre> <p> Notice that both ends of the communication are here; the node and html sides. Thrust automatically redirects console.log from the html side to standard out. I did notice, however, that it doesn't handle the multiple arguments form of console.log, which is why I use <code>JSON.stringify()</code>. Unlike in a browser, doing <code>console.log("some object",obj)</code> would result in only the some object text, not the structure of the actual object. </p> <p> </p> <p> Now that the UI can talk to node, it can do almost anything. Save files, talk to databases, poll joysticks, or play music. Let’s build a quick text editor. </p> <p> </p> <h3 id="id77566">Building a text editor</h3> <p> </p> <p> Create a file called <code>editor.html</code>. 
</p> <pre><code>&lt;html> &lt;head> &lt;script src="http://cdnjs.cloudflare.com/ajax/libs/ace/1.1.3/ace.js" type="text/javascript" charset="utf-8">&lt;/script> &lt;script src='http://code.jquery.com/jquery-2.1.1.js'>&lt;/script> &lt;style type="text/css" media="screen"> #editor { position: absolute; top: 30; right: 0; bottom: 0; left: 0; } &lt;/style> &lt;/head> &lt;body> &lt;button id='save'>Save&lt;/button> &lt;div id="editor">function foo(items) { var x = "All this is syntax highlighted"; return x; }&lt;/div> &lt;script> var editor = ace.edit("editor"); editor.setTheme("ace/theme/monokai"); editor.getSession().setMode("ace/mode/javascript"); $('#save').click(function() { THRUST.remote.send({action:'save', content: editor.getValue()}); }); &lt;/script> &lt;/body> &lt;/html></code></pre> <p> This page initializes the editor and creates a handler for the save button. When the user presses save it will send the editor contents to the node side. Note the <code>#editor</code> css to give it a size. Without a size an Ace editor will shrink to 0. </p> <p> </p> <p> This is the new Node side code, <code>editor.js</code> </p> <pre><code>var fs = require('fs'); var thrust = require('node-thrust'); thrust(function(err, api) { var url = 'file://'+require('path').resolve(__dirname, 'editor.html'); var window = api.window({ root_url: url, size: { width: 640, height: 480, } }); window.show(); window.focus(); window.on('remote',function(evt) { console.log("got a message" + JSON.stringify(evt)); if(evt.message.action == 'save') return saveFile(evt.message); }); }); function saveFile(msg) { fs.writeFileSync('editor.txt',msg.content); console.log("saved to editor.txt"); }</code></pre> <p> </p> <p> Run this with <code>node editor.js</code> and you will see something like this: </p> <p> </p> <p> <img src='http://joshondesign.com/images/36427_ThrustShellScreenSnapz002.png' alt='text'/> </p> <p> Sweet. A real text editor that can save files. </p> <p> </p> <p> </p> <h3 id="id20635">Things I haven't covered </h3> <p> You can control native menus with Thrust. On Windows or Linux this should be done in app view itself with the various CSS toolkits. On Mac (or Linux with Unity) you will want to use the real menubar. You can do this with the api documented <a href='https://github.com/breach/thrust/blob/master/docs/api/menu.md'>here</a>. It's a pretty simple API but I want to mention something that might bite you. Menus in the menubar will in the order you create them, not the order you add them to the menubar. </p> <p> </p> <p> Another thing I didn’t cover, since I’m new to it myself, is <a href='https://github.com/breach/thrust/blob/master/docs/api/webview.md'>webviews</a>. Thrust lets you embed a second webpage inside the first, similar to an iFrame but with stricter permissions. This webview is very useful is you need to load untrusted content, such as an RSS reader might do. The webvew is encapsulated so you can run code that might crash in it. This would be useful if you were making, say, an IDE or application editor that must repeatedly run some (possibly buggy) code and then throw it away. I’ll do a future installment on web views. </p> <p> I also didn't cover is packaging. Thrust doesn’t handle packaging with installers. It gives you an executable and some libraries that you can run from a command line, but you still must build a native app bundle / deb / rpm / msi for each platform you want to support if you want a nicer experience. 
Fortunately there are other tools to help you do this like <a href='http://www.jrsoftware.org/isinfo.php'>InnoSetup</a> and <a href='https://github.com/jordansissel/fpm'>FPM</a> . </p> http://joshondesign.com/2014/12/16/nodethrusttut\nBeautiful Lego 2: DarkThe follow up to last year’s Beautiful Lego, Mike Doyle brings us back for more of the best Lego models from around the world. This time the theme is Dark. As the book explains it: “destructive objects, like warships and mecha, and dangerous and creepy animals… dark fantasies of dragons and zombies and spooks” I like the concept of a theme as it helps focus the book. The theme of Dark was stretched a bit to include banks and cigarettes, and vocaloids (mechanical japanese pop-stars), but it’s still 300+ gorgeous pages of the world’s best Lego art. Beautiful Lego 2 is filled to the brim with Zerg like insect hordes, a <b>lot</b> of Krakens, and some of the cutest mechs you’ve ever seen. <p> </p> <p> </p> <p> </p> <p> <img src='http://joshondesign.com/images/56563_blego2_022-023.png' alt='text'/> </p> <p> Unlike the previous book, Beautiful Lego 2 is a hardcover. I guess the first book was popular enough that No Starch Press could really invest in this one, and it shows. It’s a thick book with proper stitching, a dust jacket, and quality paper. The book lays nicely when open like a good library edition would. This is a book that will still look new in a few decades. </p> <p> Beautiful Lego 2 is a true picture book, and well worth the price for the Lego fan in your family. I know you can get an electronic edition, but this is the sort that lets us know why physical books still exist. BTW. you can get 30% off right now with the coupon code SHADOWS . </p> <p> Buy it now at <a href='http://www.nostarch.com/beautifullego2'>NoStarch.com</a> </p> http://joshondesign.com/2014/12/13/beautifullego2\nAmino: Now with 33% less C++My hatred of C and C++ is world renown, or at least it should be. It's not that I hate the languages themselves, but the ancient build chain. A hack of compilers and #defines that have to be modified for every platform. Oh, and segfaults and memory leaks. The usual. Unfortunately, if you want to write fast graphics code you're pretty much going to be stuck with C or C++, and that's where Amino comes in. <p> <a href='https://github.com/joshmarinacci/aminogfx'>Amino</a>, my JavaScript library for OpenGL graphics on the Raspberry Pi (and other platforms), uses C++ code underneath. NodeJS binds to C++ fairly easily so I can hide the nasty parts, but the nasty is still there. </p> <p> As the Amino codebase has developed the C++ bindings have gotten more and more complicated. This has caused a few problems. </p> <p> </p> <h3 id="id94958">3rd Party Libraries</h3> <p> First, Amino depends on several 3rd party libraries; libraries the user must already have installed. I can pretty much assume that OpenGL is installed, but most users don't have FreeType, libpng,or libjpeg. Even worse, most don't have GCC installed. While I don't have a solution for the GCC problem yet, I want to get rid of the libPNG and jpeg dependencies. I don't use most of what the libraries offer and they represent just one more thing to go wrong. Furthermore, there's no point is dealing with file paths on the C++ side when Node can work with files and streams very easily. </p> <p> So, I ripped that code completely out. Now the native side can turn a Node buffer into a texture, regardless of where that buffer came from. 
Jpeg, PNG, or in-memory data are all treated the same. To decompress JPEG and PNGs I found two image libraries, <a href='http://keyj.emphy.de/nanojpeg/'>NanoJPEG</a> and <a href='https://github.com/elanthis/upng'>uPNG</a>, (each a single file!), to get the job done with minimal fuss. This code is now part of Amino so we have fewer dependencies and more flexibility. </p> <p> I might move to pure JS for image decoding in the future, as I'm doing for my <a href='https://github.com/joshmarinacci/node-pureimage'>PureImage library</a> but that might be very slow on the PI so we'll stick with native for now. </p> <p> </p> <h3 id="id37578">Shaders Debugging</h3> <p> GPU Shaders are powerful but hard to debug on a single platform, much less multiple platforms. As the shaders have developed I've found more differences between the Raspberry Pi and the Mac, even when I use ES 2.0 mode on the Mac. JavaScript is easier to change and debug than C++ (and faster to compile on the Pi), so I ripped out all of the shader init code and rewrote it in JavaScript. </p> <p> The new JS code initializes the shaders from files on disk instead of inline strings, as before. This means I don't have to recompile the C++ side to make changes. This also means I can change the init code on a per platform basis at runtime rather than #defines in C++ code. For speed reasons the drawing code which <b>uses</b> the shaders is still in C++, but at least all of the nasty init code is in easier to maintain JS. </p> <p> </p> <h3 id="id91273">Input Events</h3> <p> Managing input events across platforms is a huge pain. The key codes vary by keyboard and particular locale. Further complicating matters GLFW, the Raspbian Linux Kernel, and the web browser also use different values for different keys, as well as mouse and scroll events. Over the years I've built key munging utilities over and over. </p> <p> To solve this problem I started moving Amino's keyboard code into a new Node module: <a href='https://github.com/joshmarinacci/inputevents'>inputevents</a>. It does not depend on Amino and will, eventually, be usable in plain browser code as well as on Mac, Raspberry PI, and wherever else we need it to go. Eventually it will support platform specific <a href='http://en.wikipedia.org/wiki/Input_method'>IMEs</a> but that's a ways down the road. </p> <p> </p> <h3 id="id27324">Random Fixes and Improvements</h3> <p> Since I was in the code anyway, I fixed some alpha issues with dispmanx on the Pi, made opacity animatable, added more unit tests, turned clipping back on, built a new PixelView retained buffer for setting pixels under OpenGL (ex: software generation of fractals), and started using pre-node-gyp to make building the native parts easier. </p> <p> I've also started on a new rich text editor toolkit for both Canvas and Amino. It's extremely beta right now, so don't use it yet unless you like 3rd degree burns. </p> <p> That's it for now. <a href='https://github.com/joshmarinacci/aminogfx'>Enjoy, folks.</a> </p> <p> Carthage and C++ must be destroyed. </p> http://joshondesign.com/2014/11/18/aminolesscpp\nMulti-threaded fractals with Amino and NodeJSI recently added the ability to set individual pixels in Amino, my Node JS based OpenGL scene graph for the Raspberry Pi. To test it out I thought I'd write a simple Mandlebrot generator. The challenge with CPU intensive work is that Node only has one thread. If you block that thread your UI stops. Dead. To solve this we need a background processing solution. 
<p> </p> <h3 id="id8916">A Simple Background Processing Framework</h3> <p> While there are true threading libraries for Node, the simplest way to put something into the background is to start another Node process. It may seem like starting a process is heavyweight compared to a thread in other languages, but if you are doing something CPU intensive the cost of the exec() call is tiny compared to the rest of the work you are doing. It will be lost in the noise. </p> <p> To be really useful, we don't want to just start a child process, but actually communicate with it to give it work. The <code>childprocess</code> module makes this very easy. <code>childprocess.fork()</code> takes the path to another script file and returns an event emitter. We can send messages to the child through this emitter and listen for responses. Here's a simple class I created called <code>Workman</code> to manage the process. </p> <pre><code>var Workman = { count: 4, chs:[], init: function(chpath, cb, count) { if(typeof count == 'number') this.count = count; console.log("using thread count", this.count); for(var i=0; i&lt;this.count; i++) { this.chs[i] = child.fork(chpath); this.chs[i].on('message',cb); } }, sendcount:0, sendWork: function(msg) { this.chs[this.sendcount%this.chs.length].send(msg); this.sendcount++; } }</code></pre> <p> Workman creates <code>count</code> child processes, then saves them in the <code>chs</code> array. When you want to send some work to it, call the <code>sendWork</code> function. This will send the message to one of the children, round robin style. </p> <p> Whenever a child sends an event back, the event will be handed to the callback passed to the <code>workman.init()</code> function. </p> <p> Now that we can talk to the child processes it's time to do some drawing. </p> <p> </p> <h3 id="id30410">Parent Process</h3> <p> This is the code to actually talk to the screen. First the setup. <code>pv</code> is a new PixelView object. A PixelView is like an image view, but you can set pixel values directly instead of using a texture from disk. <code>w</code> and <code>h</code> are the width and height of the texture in the GPU. </p> <pre><code>var pv = new amino.PixelView().pw(500).w(500).ph(500).h(500); root.add(pv); stage.setRoot(root); var w = pv.pw(); var h = pv.ph();</code></pre> <p> Now let's create a Workman to schedule the work. We will submit work for each row of the image. When work comes back from the child process the <code>handleRow</code> function will handle it. </p> <pre><code>var workman = Workman; workman.init(__dirname+'/mandle_child.js',handleRow); var scale = 0.01; for(var y=0; y&lt;h; y++) { var py = (y-h/2)*scale; var msg = { x0:(-w/2)*scale, x1:(+w/2)*scale, y:py, iw: w, iy:y, iter:100, }; workman.sendWork(msg); }</code></pre> Notice that the work message must contain all of the information the child needs to do it's work: the start and end values in the x direction, the y value, the length of the row, the index of the row, and the number of iterations to do (more iterations makes the fractal more accurate but slower). This message is the only communication the child has from the outside world. Unlike with threads, child processes do not share memory with the parent. <p> Here is the <code>handleRow</code> function which receives the completed work (an array of iteration counts) and draws the row into the PixelView. After updating the pixels we have to call updateTexture to push the changes to the GPU and screen. 
lookupColor converts the iteration counts into a color using a look up table. </p> <pre><code>function handleRow(m) { var y = m.iy; for(var x=0; x&lt;m.row.length; x++) { var c = lookupColor(m.row[x]); pv.setPixel(x,y,c[0],c[1],c[2],255); } pv.updateTexture(); }</code></pre><pre><code>var lut = []; for(var i=0; i&lt;10; i++) { var s = (255/10)*i; lut.push([0,s,s]); } function lookupColor(iter) { return lut[iter%lut.length]; }</code></pre> <p> </p> <h3 id="id82804">Child Process</h3> <p> Now let's look at the child process. This is where the actual fractal calculations are done. It's your basic Mandelbrot. For each pixel in the row it calculates a complex number until the value exceeds 2 or it hits the maximum number of iterations. Then it stores the iteration count for that pixel in the <code>row</code> array. </p> <pre><code>function lerp(a,b,t) { return a + t*(b-a); } process.on('message', function(m) { var row = []; for(var i=0; i&lt;m.iw; i++) { var x0 = lerp(m.x0, m.x1, i/m.iw); var y0 = m.y; var x = 0.0; var y = 0.0; var iteration = 0; var max_iteration = m.iter; while(x*x + y*y &lt; 2*2 && iteration &lt; max_iteration) { xtemp = x*x - y*y + x0; y = 2*x*y + y0; x = xtemp; iteration = iteration + 1; } row[i] = iteration; } process.send({row:row,iw:m.iw,iy:m.iy}); })</code></pre> <p> After every pixel in the row is complete it sends the row back to the parent. Notice that it also sends an iy value. Since the children could complete their work in any order (if one row happens to take longer than another), the iy value lets the parent know which row this result is for so that it will be drawn in the right place. </p> <p> Also notice that all of the calculation happens in the <code>message</code> event handler. This will be called every time the parent process sends some work. The child process just waits for the next message. The beauty of this scheme is that Node handles any overflow or underflow of the work queue. If the parent sends a lot of work requests at once they will stay in the queue until the child takes them out. If there is no work then the child will automatically wait until there is. Easy-peasy. </p> <p> Here's what it looks like running on my Mac. Yes, Amino runs on Mac as well as Linux. I mainly talk about the Raspberry Pi because that's Amino's sweet spot, but it will run on almost anything. I chose Mac for this demo simply because I've got 4 cores there and only 1 on my Raspberry Pi. It just looks cooler to have for bars spiking up. :) </p> <p> <img src='http://joshondesign.com/images/41215_iTermScreenSnapz017.png' alt='text'/> </p> <p> </p> <p> This code is now in the <a href='https://github.com/joshmarinacci/aminogfx'>aminogfx</a> repository under <code>demos/pixels/mandle.js</code>. </p> http://joshondesign.com/2014/10/28/aminofractal\nThoughts on APL and Program NotationA post about <a href='http://archive.vector.org.uk/art10501320 '>Arthur Whitney and kOS</a> made the rounds a few days ago. It concerns a text editor Arthur made with four lines of K code, and a complete operating system he’s working on. These were all built in <a href='http://tinyurl.com/ybw4rxn'>K</a>, a vector oriented programming language derived from <a href='http://tinyurl.com/27ttcy'>APL</a>. This reminded me that I really need to look at APL after all of the language ranting I’ve done recently. <p> Note: For the purposes of this post I’m lumping K, J, and the other APL derived languages in with APL itself, much as I’d refer to Scheme or Clojure as Lisps. 
</p> <p> After reading up, I’m quite impressed with APL. I’ve always heard it can do complex tasks in a fraction of the code as other languages, and be super fast. It turns out this is very true. Bernard Legrand's <a href='http://archive.vector.org.uk/art10011550'>APL – a Glimpse of Heaven</a> provides a great overview of the language and why it’s interesting. </p> <p> APL is not without it’s problems, however. The syntax is very hard to read. I’m sure it becomes easier once you get used to it, but I still spent a lot more time analyzing a single line of code than I would in any another language. </p> <p> APL is fast and compact for some tasks, but not others. Fundamentally it’s a bunch of operations that work on arrays. If your problem can be phrased in terms of array operations then this is awesome. If it can’t then you start fighting the language and eventually it bites you. </p> <p> I found anything with control structures to be cumbersome. This isn’t to say that APL can’t do things that require an <code>if</code> statement, but you don’t get the benefits. <a href='http://aplwiki.com/Studio/ConvexHull '>This code to compute a convex hull</a>, for example, seems about as long as it would be in a more traditional language. With a factor of 2, at least. It doesn’t benefit much from APL’s strengths. </p> <p> Another challenge is that the official syntax uses non-ASCII characters. I actually don’t see this as a problem. We are a decade and a half into the 21st century and can deal with non-ASCII characters quite easily. The challenge is that the symbols themselves to most people. I didn’t find it hard to pick up the basics after reading a half hour tutorial, so I think the real problem is that the syntax scares programmers away before they ever try it. </p> <p> I also think enthusiasts focus on how much better APL is than other languages, rather than simply showing someone why they should spend the time to learn it. They need to show what it can <b>do</b> that is also <b>practical</b>. While it’s cool to be able to calculate all of the primes from 1 to N in just a few characters, that isn’t going to sell most developers because that’s not a task they actually need to accomplish very often. </p> <p> APL seems ideal for solving mathematical problems, or at least a good subset of them. The problem for APL is that Mathematica, MathLab, and various other tools have sprung up to do that better. </p> <p> Much like Lisp, APL seems stuck between the past and the future. The things it’s really good at it is too general for. More recent specialized tools to the job better. APL isn't general enough to be good as a general purpose language. And many general purpose languages have added array processing support (often through libraries) that make them good enough for the things APL is good at. Java 8 streams and lambda functions, for example. Thus it remains stuck in a few niches like high speed finance. This is not a bad niche to be in (highly profitable, I’m sure) but APL will never become widely used. </p> <p> That said, I really like APL for the things it’s good at. I wish APL could be embedded in a more general purpose language, much like regular expressions are embedded in JavaScript. I love the concept of a small number of functions that can be combined to do amazing things with arrays. This is the most important part of APL &mdash; for me at least &mdash; but it’s hidden behind a difficult notation. 
</p> <p> I buy the argument that any notation is hard to understand until you learn it, and with learning comes power. Certainly this is true for reading prose. </p> <p> Humans are good pattern recognizers. We don’t read by parsing letters. Only children just learning to read go letter by letter. The letters form patterns, called words, that our brains recognize in their entirety. After a while children's brains pick up the patterns and process them whole. In fact our brains are <b>so</b> good at picking up patterns that we can read most English words with all of the letters scrambled <a href='http://joi.ito.com/weblog/2003/09/14/ordering-of-let.html'>as long as the first and last letters are correct</a>. </p> <p> I’m sure this principle of pattern recognition applies to an experienced APL programmer as well. They can probably look at this </p> <pre><code>x[⍋x←6?40]</code></pre> <p> and think: <code>pick six random numbers from 1 to 40 and return them in ascending order</code>. </p> <p> After a time this mental processing would become natural. However, much like with writing, code needs spacing and punctuation to help the symbolic "letters" form words in the mind of the programmer. Simply pursuing compactness for the sake of "mad skillz props" doesn’t help anyone. It just makes for <a href='http://en.wikipedia.org/wiki/Write-only_language'>write-only code</a>. </p> <p> Were I to reinvent computing (in my fictional JoshTrek show where the computer understands all spoken words with 200% accuracy), I would replace the symbols with actual meaningful words, then separate them into chunks with punctuation, much like sentences. </p> <p> this </p> <pre><code>x[⍋x←6?40]</code></pre> would become<pre><code>deal 6 of 1 to 40 => x, sort_ascending, index x</code></pre> <p> The symbols are replaced with words and the ordering swapped, left to right. It still takes some training to understand what it means, but far less. It’s not as compact but far easier to pick up. </p> <p> So, in summary, APL is cool and has a lot to teach us, but I don’t think I’d ever use it in my daily work. </p> <p> </p> <p> addendum: </p> <p> Since writing this essay I discovered <a href='http://tinyurl.com/55cab8 '>Q</a>, also by Arthur Whitney, that expands K’s terse syntax, but I still find it harder to read than it should be. </p> http://joshondesign.com/2014/10/23/thoughtsapl\nElectron 0.4 beta 3I am unhappy to announce the release of Electron 0.4 beta 3. <p> What's that? <b>unhappy</b>?! Well...... </p> <p> I haven't done a release quite some time. Part of this delay is from a complete refactoring of the user interface; but another big chunk of time comes from trying to build Electron with Atom Shell. </p> <p> <a href='https://github.com/atom/atom-shell'>AtomShell</a> is a tool that bundles WebKit/Chromium and NodeJS into a single app bundle. This means developers can download a single app with an icon instead of running Electron from the command line. It might even let us put it into the various app stores some day. </p> <p> Unfortunately, the switch to AtomShell hasn't been as smooth as I would like. The Mac version builds okay but I have yet to get Windows to work. There seems to be some conflict between the version of Node that the native serial port module uses and the version of Node inside of AtomShell. While I'm sure these are solvable problems I don't want to hold back the rest of Electron. It's still useful even if you have to launch it from the command line. So... 
</p> <p> </p> <h3 id="id38659">Electron 0.4 beta 3</h3> <p> You can download a Mac .app bundle <a href='http://joshondesign.com/p/apps/electron/builds/0.4_b3/'>from here</a>, or <a href='https://github.com/joshmarinacci/ElectronIDE'>check out the source</a> and run <code>node electron</code> to start it from the command line. The new file browser works as do the various dialogs. Compiler output shows up in the debug panel. You can upload through the serial port but the serial port console is still disabled (due to other bugs I'm still working through). </p> <p> Undoubtedly many things are still broken during the transition from the old UI to the new. Please, please, please <a href='https://github.com/joshmarinacci/ElectronIDE/issues'>file issues on github</a>. I'll get to them ASAP. </p> <p> Thanks, Josh </p> http://joshondesign.com/2014/10/20/electron04b3\nPhoton, a commandline shell in less than 300 lines of JavaScriptI have a problem. Sometimes I get something into my head and it sticks there, taunting me, until I do something about it. Much like the stupid song stuck in your brain, you must play the song to be released from it's grasp. So it is with software. <p> Last week I had to spend a lot of time in Windows working on a port of Electron. This means lots of Node scripts and Git on the command line. </p> <p> </p> <h3 id="id48099">Windows Pains</h3> <p> </p> <p> It may sound like it sometimes, but I really don't hate Windows. It's a fine GUI operating system but the command shell sucks. Really, really bad. Powershell is an improvement but still pretty bad. There has to be something better. I don't want to hate myself and throw my laptop across the room while coding. It dampens productivity. <a href='http://joshondesign.com/2014/09/30/msfixwindows'>This blog</a> was the result of that rage face. I tiny birdy told me things will get a lot better in Windows 10. I sure hope so. </p> <p> In the past I would have used Cygwin, which is a port of Bash and a bunch of unix utilities. Sadly it never worked very well (getting POSIX compliant apps to run on Windows is just a big ball of pain) and support has dwindled in recent years. </p> <p> Then something happened. After pondering for a while I realized I didn't actually care about having standard Unix utilities. Really I just want the Bash <i>interface</i>. I want a command line interpreter that has a proper history, tab completion, and directory navigation. I want <code>ls</code> and <code>more</code> and <code>cd</code>. I don't actually care if they are spec compliant and can be used in Bash shell scripts. I don't really care about shell scripts at all, since I write everything in Node now. I just want the interface. </p> <p> I could make a new shell, something simple that would get the job done. Node is already ported to Windows, it's built around streams, and NPM gives me access to endless existing modules. That's 90% of the work already done. I just need to stitch it together. </p> <p> </p> <h3 id="id2253">Photon</h3> <p> And so Photon was born. </p> <p> Photon is about 250 lines of Javascript that give a command line with <code>ls</code>, <code>cp</code>, <code>mv</code>, <code>rm</code>, <code>rmdir</code>, <code>mkdir</code>, <code>more</code>, <code>pwd</code>, and the ability to call other programs like <code>git</code>. It has a very simple form of tab completion (rather buggy), and uses ANSI colors and tables for formatting. (For some reason there are approximately 4.8 billion ANSI color modules for Node). 
</p> <p> All you need to do is <code>npm install -g photonsh</code> then <code>photonsh</code> to get this: </p> <p> <img src='http://joshondesign.com/images/65850_iTermScreenSnapz011.png' alt='Photon Shell screenshot'/> </p> <p> </p> <p> Most features were trivial to implement. Here is the function for <code>cp</code>. </p> <pre><code> cp: function(a,b) { if(!fs.existsSync(a)) return fileError("No such file: ",a); if(!fs.statSync(a).isFile()) return fileError("Not a file: ",a); var ip = fs.createReadStream(path.join(cwd,a)); var op = fs.createWriteStream(path.join(cwd,b)); ip.pipe(op); },</code></pre> <p> Pretty much exactly what you would expect. For the buffered editor with history I used Node's built in <code>readline</code> module which includes callbacks for tab completion. </p> <p> </p> <p> </p> <h3 id="id39930">The hard part</h3> <p> The grand irony here is that I wrote it because of my Windows pain but have yet to actually run it on Windows. I stopped that Windows porting effort for other reasons; so now I just have this program I randomly wrote. Rather than waste the man-months of effort (okay, it was really only about 3 hours), I figured something like this should be shared with the world so that others might learn from my mistakes. </p> <p> Speaking of mistakes, Photon is horribly buggy and you probably shouldn't run it. No really, it could totally delete your hard drive and stuff. More importantly, Node TTY support is iffy. It turns out Unix shells are very hard to write because of lots of semi-documented assumptions. Go try to write Xterm sometime. There's a reason few people have done it. </p> <p> In theory a unix shell is simple. You exec a program and pipe it's output to stdout until it's done. The same with input. But what about buffering? But what about ANSI codes? But what about raw keyboard input? Apparently there is a whole world of adhoc specs for how command line apps do 'interactive' things. Running grep from exec is easy. Running vim is not. </p> <p> In the end I found pausing Node's own REPL interface then execing with the 'inherit' flag worked most of the time. I'm sure there's a better way to do it, but casual Googling with Bing hasn't found it yet. </p> <p> </p> <h3 id="id80973">Onward!</h3> <p> So where does Photon go from here? I have no idea. There's tons of things you could do with it. Node can stream anything, so copying a remote URL to a local file should be trivial. Or you could build a text mode raytracer. Whatever. The choice is yours. Choose wisely. Or don't. <a href='https://github.com/joshmarinacci/photonsh'>The code will still be here</a> (on github). </p> <p> Enjoy! </p> http://joshondesign.com/2014/10/15/photonsh\nTypographic Programming Wrapup<i>I need to move on to other projects so I’m wrapping up the rest of my ideas in this blog. Gotta get it outta my brainz first.</i> <p> The key concept I’ve explored in <a href='http://joshondesign.com/tags/programming'>this series</a> is that the code you see in an editor need not be identical to what is stored on disk, or the same as what is sent to the compiler. If we relax this constraint then a world of opportunity opens up. We’ve been writing glorified text files for 40 years. We can do better. Let’s explore. </p> <p> </p> <h3 id="id16351">Keywords</h3> <p> Why can’t you name a variable <code>for</code>? Because in many common languages <code>for</code> is a reserved word. You, as the programmer, aren’t allowed to use <code>for</code> because it represent a particular loop construct. 
The underlying compiler doesn’t actually care of course. It doesn’t care about the name of <i>any</i> of your variables or other words in your code. The compiler just needs them to be unique symbols, some of which are mapped to existing constructs like conditionals and loops. </p> <p> If the compiler doesn’t care then why can’t we do it? Because the <i>parser</i> (the ‘front end’ of the compiler) does care. The parser needs to <i>unambiguously</i> transform a stream of ASCII text into an abstract syntax tree. It’s the unambiguous part that’s the trouble. The syntax restrictions in most common languages are there to make the parser happy. If the parser was magical and could just "know what we meant" then any syntax could be used. Perhaps even syntax that made more sense to the human rather than the computer. </p> <p> Fundamentally, this is what typographic programming does. It lets us tell the parser which text is what without using specific syntax rules. Instead we use color or font choices to indicate whether a given chunk of text is a variable or keyword or something else. Of course editing in such a system would be a pain, but we already know how to solve that problem. Graphical word processors are proof that it is possible. Before we get to <i>how</i> we solve it let us consider <i>why</i>. Would such a system have enough benefits to outweigh the cost of building it. What <i>new</i> things could we do? </p> <p> </p> <h3 id="id58800">Nothing’s reserved</h3> <p> If we use typography to indicate syntax, then keywords no longer need to be reserved. Any keyword could be used as a variable and any text string could be used as a keyword. You could swap <code>for</code> with <code>fore</code> or <code>thusly</code>. You could use spaces in keywords as <code>for each of</code>. These aren’t very useful examples but the compiler could easily handle them. </p> <p> With the syntactic restrictions lifted we are free to explore new control flow constructs. How about <code>forever</code> to mean an infinite loop and <code>10 times</code> for standard for fixed length loops? It’s all the same to the compiler but the human reading it would better understand the meaning. </p> <p> </p> <h3 id="id39542">Custom Operators</h3> <p> If nothing is reserved then user defined operators become easy. After all; what is an operator but a function with a single letter name from a restricted character set. In Python <code>4 + 5</code> is just sugar for <code>add(4,5)</code>. </p> <p> With no syntax rules anything could be an operator. Operators could have multiple letter names, or other symbols from the full unicode set. The only reason operators are given special treatment to begin with is because they represent functions which are so commonly used (like arithmetic) that we want a shorthand. With free syntax we can create a shorthand for the functions that are useful <i>to the task at hand</i> rather than the abstract general purpose tasks the language inventors imagined. </p> <p> </p> <p> Let’s look at something more concrete. Using complex numbers and vectors is common in graphics programming, but we have to use clumsy and verbose syntax in most languages. This is sad. Mathematics already has invented compact notation for these concepts but we can’t use them due to ASCII limitations. 
Without these limitations we could add complex numbers with the plus sign like this: </p> <pre><code>A +B</code></pre> <p> instead of </p> <pre><code>complexAdd(A,B)</code></pre> <p> </p> <p> To help the programmer remember these are complex numbers they could be rendered in a different color. </p> <p> There are two ways to multiply vectors: the dot product and the cross product. They have very different meanings. With full unicode we could use the correct symbols like this: </p> <pre><code>A &#8901; B // dot product</code></pre><pre><code>A &#x2a2f; B // cross product</code></pre> <p> No ambiguity at all. It would be too much to expect a language to support every possible notation. Much better instead to have a language that lets the programmer create their own notation. </p> <p> </p> <p> </p> <h3 id="id72793">Customization in Practice</h3> <p> </p> <p> So how would this work in practice? At some point the code must be transformed into something the compiler understands. Let’s postulate a hypothetical language called <i>X</i>. X has no syntax rules, only the semantic rules of it’s AST. To tell the complier how to convert the code into the AST we must provide our own rules. Something like this. </p> <pre><code>fun => function cross => cross dot => dot |x| => magnitude(x) fun intersection(V,R) { return V dot R / |V|; }</code></pre> <p> </p> <p> We have now defined a mini language in X which still compiles to the same syntactic structure. </p> <p> Of course typing all of these rules in every file (or compilation unit) would be a huge pain, so we could include them much as we include external libraries. </p> <pre><code>@include math.rules fun intersection(V,R) { return V dot R / |V|; }</code></pre> <p> Most importantly, not only would the compiler understand these rules but <i>so would the editor</i>. The editor can now indicate that <code>V &#8901; R</code> is valid only if they are both vectors. It could enforce the rules from the rule file. Now our code is limited only by the imagination of our rule writers, not the fixed compiler. </p> <p> In practice, were X to become popular, we would not see everyone making up their own rules. Instead usage would gather around a few popular rulesets much as JavaScript gathered around a few popular libraries like JQuery. We might call each of these rulesets <i>dialects</i>, each a particular flavor derived from the base X language. Custom DSLs would become trivial to implement. It would be common for developers to use one or two "standard" dialects for most of their code but use a special purpose dialect for a particular task. </p> <p> The important thing here is that the language no longer has a <i>fixed</i> syntax. It can adapt and evolve as needed. All without changing the compiler. </p> <p> </p> <p> </p> <h3 id="id41421">How do you edit?</h3> <p> I hope I’ve convinced you that a flexible syntax delimited by typography is useful. Many common idioms like iteration, accumulation, and data structure traversals could be distilled to concise syntax. And if it has problems then we can tweak it. </p> <p> There is one big problem though. How would you actually <i>edit</i> code like this? </p> <p> Fortunately this problem has already been solved by graphical word processors. These tools use color, font, size, weight and spacing to distinguish one element from another. Choosing the mode for a variable is as simple as selecting it with the cursor and using a drop down. </p> <p> Manually highlighting a entire page of code would quickly grow tedious, of course. 
For common operations, like declaring a variable, the programmer could type a special symbol like <code>@</code>. This tells the editor that the next letters are a variable name. The programmer ends it with @ or by pressing the spacebar or return key. This @ symbol doesn’t exist in the code. It is simply to indicate to the editor that the programmer wants to be in ‘variable’ mode. Once the mode is finished the @’s go away and the text is rendered with the ‘variable’ font. This is no different than using star word star to indicate bold in Markdown text. The stars never appear in the rendered text. </p> <p> The choice of the <code>@</code> symbol doesn't matter as long as it's easy with the user's native keyboard. @ is good for US keyboards. French or Russians might use something else. </p> <p> </p> <h3 id="id20820">Resolving Ambiguity</h3> <p> Even using manual markup might become tedious, though. Fortunately the editor can usually figure out the meaning of any given token by using the dialect rules. If the rules indicate that # equals division then the editor can just do the right thing. Using manual highlighting would only be necessary if the <i>dialect itself</i> introduces an ambiguity. (ex: # means division and also the start of a hex value) </p> <p> What about multiplying vectors? You could type in either of the two proper symbols, but the average keyboard doesn’t support those directly. You’d have to memorize a unicode code point or use a floating dialog. Alternatively, we could use code completion. If you type &#42; then the editor knows this must be either dot or cross product. It provides only those two choices in a drop down, much as we auto-complete method names today. </p> <p> Using a syntax free language does not fully remove the need to resolve ambiguity, it just moves the resolution process to edit time rather than compile time. This is good. The human is present at edit time and can explain to the computer was is correct. The human is not there at compile time, so any ambiguity must result in an error that the human must come back and fix. Furthermore, resolving the ambiguity need only happen once, when the human types it, not every time the code is compiled. This will further reduce code regressions when other parts of the system change. </p> <p> Undoubtedly we would discover more edge cases, but these are all solvable. Modern GUI word processors and spreadsheets prove this. A more challenging issue is version control. </p> <p> </p> <h3 id="id38816">Versioning</h3> <p> Code changes over time. It must be versioned. I don’t know why it took 40 years for us to invent distributed version control systems like Git, but at least we have it now. It would be a shame to give that up just as we’ve gotten the world on board. The problem is Git and other VCSs don’t really understand code. They just understand text. There are really only two ways to solve this: </p> <p> 1) modify git, and the other tools around it (diff viewers, github’s website, etc.) to support binary diffs specific to our new system. </p> <p> 2) make the on disk format be pure text. </p> <p> Clearly option 1 is a non-starter. One day, once language X takes over the world, we could ask the GitHub team to add support for X diffs, but that’s a long ways off. We have to start with option 2. </p> <p> You might think I’m going back on what I said at the start. After all, I stated we should no longer be writing code as text on disk, but that is exactly what I am suggesting. 
What I don’t want is to store <i>the same thing</i> that we edit. From the VCS’s point of view the editor and visual representation are irrelevant. The only thing that matters is what is the file on disk. X needs a canonical on serialization format. Regardless of what tool you use to edit X, as long as it saves to the same format we are fine. This is no different than SQL or HTML. Everyone has their favorite tool, but they all write to the same format. </p> <p> </p> <h3 id="id4130">Canonical Serialization Format.</h3> <p> X’s serialization format should obviously be plain text. < 128bit ASCII would be fine, though I think we could handle UTF8 easily. Most modern diff tools can work with UTF8 cleanly, so Japanese comments and math symbols would come through just fine. </p> <p> The X format should also be unambiguous. Variables are marked up explicitly as variables. Operators as operators. There should be no need for the parser to guess at anything or interpret syntax rules. We could use one of the many existing formats like JSON, XML, or even LaTex. It doesn’t really matter since humans will rarely need to look at them. </p> <p> But.... since we are defining a new serialization format anyways, there are a few useful things we could add. </p> <p> </p> <h3 id="id11764">Code as Graph</h3> <p> Code is really just a graph. Graphs can be serialized in many ways. Rather than using function names inline they could be represented by identifiers which point to a lookup table. Then, if a function is renamed the code only changes in one place rather than at every point in the code where the function is used. This creates a semantic diff that the diff tool could render as ‘function Y renamed to Z’. </p> <pre><code>v467 = foo v468 = bar v469 = baz fun v467 () { return v468 + v469; }</code></pre> <p> Semantic diff-ing could be very powerful. Any refactoring should be reducible to its essential meaning: <i>moved X to a new class</i> or <i>extracted Y from Z</i>. Whitespace changes would be ignored (or never stored in the first place). Commit messages could be context aware: <i>changed X in the unit test for Y</i> and <i>added logging to Z</i>. Our current tools just barely understand when a file has been renamed instead of deleted and a new one added. There’s a lot of room for innovation here. </p> <p> </p> <h3 id="id32518">WrapUp</h3> <p> I hope I’ve convinced you there is value in this approach. Building language X still won’t be easy. To be viable we have to make a compiler, useful dialect definitions, and a visual editor; all at the same time. That’s a lot of work before anyone else can use it. Building on top of existing tools like Eclipse or Atom.io would help, but I know it’s still a big hill to climb. Trust me. The view will be worth it. </p> http://joshondesign.com/2014/10/06/typoplwrapup\nHow Microsoft can fix Windows. They have the Technology.<i>Note: I’m a research at Nokia but this blog does not represent my employer. I didn’t move to Microsoft and I’ve never been on the Windows Phone team. These ill considered opinions are my own.</i> <p> Windows 10 seems nice and all, but it doesn’t do anything to make me care. Fortunately Microsoft can fix all of Windows problems if only they follow my simple multistep plan. You’re welcome. </p> <p> </p> <h3 id="id50492">First, Fix the damn track pads.</h3> <p> The problem: my employer gave me a very nice, probably expensive, laptop. It’s name rhymes with a zinc fad. It’s specs sure are nice. It’s very fast. but the track pad is horrible. 
Every Windows laptop I’ve tried (which is a lot because Jesse likes to ‘do computers’ at Costco) has a horrible trackpad. why is this so hard? I simply can’t bear to use this laptop without a mouse. the cursor skips around, gestures don’t work all the time, and it clicks when i don’t and it doesn’t click when I do. </p> <p> The fix: <i>Take over trackpad drivers</i> and make a new quality test for Win10 certification. It used to be that every mouse and keyboard needed it’s own driver, and they were usually buggy. I bought Logitech trackballs in the 90s because they seemed to be the only guys who cared to <i>actually test</i> their drivers (and the addon software was mildly useful). Sometime in the early USB days (Win98ish?) MS made a default mouse and keyboard driver that all devices had to work with. Since then it’s never been an issue. Plug in any mouse and it works perfectly. 100% of the time. MS needs to do the same for trackpads. </p> <p> Please write your own driver for the N most popular chipsets, standardize the gesture support throughout the OS, then mandate a certain quality level for any laptop that wants to ship windows 10. Hey OEM: If it’s not a Macbook Air quality trackpad experience then no Windows 10 for you. </p> <p> </p> <h3 id="id70689">Make A Proper Command Line Shell</h3> <p> Hide PowerShell, Cygwin, whatever Visual Studio ships with (it has a shell, right?) and the ancient DOS prompt. Make a proper terminal emulator with Bash. (Fix the bugs first). Build it in to the OS, or at least as a free developer download (one of those MS Plus thingies you promised us). </p> <p> This shell should be fully POSIX compliant and run all standard Unix utilities. I understand you might worry that full POSIX would let developers port code from another platform instead of writing natively for you. That is very astute thinking… for 1999. Unfortunately we live in the futuristic hellscape that is 2014. You need to make it as easy as possible for someone to port code to Windows. Eliminate all barriers. Any standard Unix command line program should compile out of the box with no code changes. Speaking of which.. </p> <p> </p> <h3 id="id73754">Give your C++ compiler a GCC mode. </h3> <p> For some reason all ANSI C code compiles perfectly on Mac and Linux but requires special #IFDEFs for Windows.h. Slightly different C libs? Sightly different calling syntax? None of this <i></i>cdecl vs <i></i>stdcall nonsense. Make a <code>--gcc</code> flag so that bog standard Linux code compiles with zero changes. Then submit patches to GNU Autoconf and the other make file builders so that this stuff just works. Just fix it. </p> <p> </p> <p> </p> <h3 id="id9036">Build a Package Manager</h3> <p> Now that we have a proper command line Windows needs a package manager. I use Brew on Mac and it works great. I can install any package, share formulas for new packages, and keep everything up to date. I can grab older versions of packages if I want. I can switch between them. Everything all works. Windows <b>needs</b> this, and it should work from both a GUI and CLI. </p> <p> I know Windows has NuGet and Chocolatey and supposedly something is coming called OneGet. There needs to be one official system that really works. It handles all dependencies. And it should be easy to use with no surprises. </p> <p> <i>"What surprises?"</i> I hear you say? I wanted to install Node. I couldn’t figure out which package manager to use so I chose Chocolatey since it seemed to be all the new hotness. 
I go to their website and find four different packages: Node JS (Install), Node JS, Node JS (Command Line), Node Package Manger. What? Which do I choose? They all have a lot of downloads. On every other platform you just install Node. NPM is built in. There are no separate packages. It’s all one thing because you can’t use part of it without the rest. </p> <p> NodeJS is an alias for NodeJS.commandline. NodeJS.commandline installs to the Chocolatey lib dir. NodeJS.install installs to a system dir. It turns out Chocolatey has both installable and portable packages. As near as I can tell they are identical except for the install path, which is something I shouldn’t have to care about anyway. Oh, and one way will add it to your path and the other won’t. What? Why should I have to care about the difference? Fix it! </p> <p> I really hope OneGet straightens all of this nonsense. There should be just one way to do things and it must work 100% of the time. I know Microsoft has MSDN subscriptions to sell, but that’s just another repo source added to the universal package manager. </p> <p> </p> <h3 id="id98230">Make Visual Studio be everywhere.</h3> <p> Visual Studio is really an impressive piece of software. It’s really good at what it does. The tooling is amazing. Microsoft needs to let the world know by making it <b>be everywhere</b>. </p> <p> If you are on Windows, you should get a free copy of VS to download. In theory this is the idea behind Visual Studio Express. So why do I still use Atom or Sublime or even JEdit on Windows? Partly because of the aforementioned package manager problem, but also because Visual Studio isn’t good for all kinds of coding. </p> <p> Visual Studio is primarily a C/C++ editor, meant for MS’s own projects (and now hacked up for WinPhone and presumably WinRT). They should make it good for everything. </p> <p> Are you a Java programmer? VS should be your first choice. It should have great Java language support and even version the JDKs with the aforementioned package manager. </p> <p> Are you a web developer? VS should have awesome HTML and JavaScript support, with the ability to edit remote files via sftp. (Which Atom still doesn't have, either, BTW). </p> <p> And all of this should be through open hackable plugins, also managed through the package manager. VisualStudio should be so good and so fast that it’s the only IDE you need on Windows, no matter what you are coding. </p> <p> Why should Microsoft do this? After all, they would be putting a lot of effort into supporting developers who don’t code for their platform. Because Microsoft needs developer mindshare. </p> <p> I know very few topflight developers who use Windows as their main OS. Most use Macbook Pros or Linux laptops. MS needs to make a development experience <b>so good</b> that programmers will <b>want</b> to use Windows, even if it’s just for web work. </p> <p> Once I use Windows every day I might take a look at MS’s other stacks. If I’m already using Visual Studio for my JavaScript work then I’d be willing to take a look at developing for Windows Phone; especially if it was a single download within a program I already have installed. Fix it! </p> <p> </p> <h3 id="id305">Buy VMWare.</h3> <p> You want me to test new versions of Windows? It should be a single click to download. You want me to use your cloud infrastructure? If I could use Visual Studio to create and manage VM instances, then it’s just a single button to deploy an app from my laptop to the cloud. 
MS’s cloud, where the real money is made. Buy VMware to make it happen if you need to. I don’t care. Just fix it! </p> <h3 id="id75986">Be open and tell the world.</h3> <p> MS has always had a problem with openness. They have great technology but have always felt insular. They build great things for Windows devs and don’t care about the rest of the world. Contribute to the larger community. Make Visual Studio good for all sorts of development, even the kinds that don’t add to the bottom line. </p> <p> Maybe Visual Studio Express already has all sorts of cool plugins that make web coding awesome. I will never know, because MS doesn’t market to me. All I hear is “please pretty please make some crappy Windows Phone apps”. </p> <p> Maybe OneGet fixes all of the package management problems, but I didn’t even know about it until I was forced to use a Windows laptop and did my own research. </p> <h3 id="id49374">Fix It!</h3> <p> Here is the real problem. MS has become closed off, an ecosystem unto itself. This is a great strategy if you are Apple, but you aren’t. You’re a software company becoming a cloud company. You must become more open if you expect your developer mindshare to grow. And from that mindshare new platforms will grow. When MS releases their Windows smart watch, or the Windows Toaster, they will find it a lot easier to get developers on board if they’ve built up an open community first. Efforts like CodePlex are nice, but this philosophy has to come from the top. </p> <p> Good luck, Nadella. </p> http://joshondesign.com/2014/09/30/msfixwindows\n60sec Review: Rust LanguageLately I've been digging into Rust, a new programming language sponsored by Mozilla. They recently rewrote their docs and <a href='http://blog.rust-lang.org/2014/09/15/Rust-1.0.html'>announced a roadmap to 1.0</a> by the end of the year, so now is a good time to take a look at it. I went through the <a href='http://doc.rust-lang.org/guide.html'>new Language Guide</a> last night, then wrote <a href='https://twitter.com/joshmarinacci/status/512344513725992960/photo/1'>a small ray tracer</a> to test it out. <p> One of the biggest success stories of the last three decades of programming is memory safety. C and C++ may be fast, but it's very easy to have dangling pointers and buffer overflows. Endless security exploits come back to this fundamental limitation of C and C++. Raw pointers == lack of memory safety. </p> <p> Many modern languages provide this memory safety, but they do it at runtime with references and garbage collection. JavaScript, Java, Ruby, Python, and Perl all fall into this camp. They accomplish this safety at the cost of runtime speed. While they are typically JITed today instead of interpreted, they are all slower than C/C++ because of their runtime overhead. For many tasks this is fine, but if you are building something low level, or something where speed really matters, then you probably go back to C/C++ and all of the problems it entails. </p> <p> Rust is different. Rust is a statically typed <i>compiled</i> language meant to target the same tasks that you might use C or C++ for today, but its whole purpose in life is to promote memory safety. By design, Rust code can't have dangling pointers, buffer overflows, or a whole host of other memory errors. Any code which would cause this <b>literally can't be compiled</b>. The language doesn't allow it. I know it sounds crazy, but it really does work. </p>
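<p> Here's a tiny sketch of my own (not from the ray tracer) showing the kind of code the compiler rejects: the inner variable dies at the end of its block, so a borrow of it can't be allowed to escape. </p> <pre><code>fn main() {
    let r;
    {
        let x = 1.5f32;
        r = &x; // error: `x` does not live long enough
    }
    // if this compiled, r would be a dangling pointer
    println!("{}", *r);
}</code></pre>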
<p> Most importantly, Rust achieves all of these memory safety guarantees at compile time. There is no runtime overhead, making the final code as fast as C/C++, but far safer. </p> <p> I won't go into how all of this works, but the short description is that Rust uses several kinds of pointers that <i>let the compiler prove</i> who owns memory at any given moment. If you write up a situation where the compiler can't predict what will happen, it won't compile. If you can get your code to compile then you are <i>guaranteed to be memory safe</i>. </p> <p> I plan to do all of my native coding in Rust when it hits 1.0. Rust has a robust FFI, so interfacing with existing C libs is quite easy. Since I absolutely hate C++, this is a big win for me. :) </p> <p> Coming into Rust I was worried that the pointer constraints would make writing code difficult, much like <a href='http://prog21.dadgum.com/38.html'>puzzle languages</a>. I was pleasantly surprised to find it pretty easy to code in. It's certainly more verbose than a dynamic language like JavaScript, but I was able to convert a JS ray tracer to Rust in about an hour. The resulting code roughly looks like what you'd expect from C, just with a few differences. Let's take a look. </p> <p> First, the basic type definitions. I created Vector, Sphere, Color, Ray, and Light types. Rust doesn't really have classes in the C++/Java sense, but it does have structs enhanced with method implementations, so you can think of them as similar to classes. </p> <pre><code>use std::num;

struct Vector {
    x: f32,
    y: f32,
    z: f32,
}

impl Vector {
    fn new(x: f32, y: f32, z: f32) -> Vector {
        Vector { x: x, y: y, z: z }
    }
    fn scale(&self, s: f32) -> Vector {
        Vector { x: self.x*s, y: self.y*s, z: self.z*s }
    }
    fn plus(&self, b: Vector) -> Vector {
        Vector::new(self.x+b.x, self.y+b.y, self.z+b.z)
    }
    fn minus(&self, b: Vector) -> Vector {
        Vector::new(self.x-b.x, self.y-b.y, self.z-b.z)
    }
    fn dot(&self, b: Vector) -> f32 {
        self.x*b.x + self.y*b.y + self.z*b.z
    }
    fn magnitude(&self) -> f32 {
        (self.dot(*self)).sqrt()
    }
    fn normalize(&self) -> Vector {
        self.scale(1.0/self.magnitude())
    }
}

struct Ray {
    orig: Vector,
    dir: Vector,
}

struct Color {
    r: f32,
    g: f32,
    b: f32,
}

impl Color {
    fn scale(&self, s: f32) -> Color {
        Color { r: self.r*s, g: self.g*s, b: self.b*s }
    }
    fn plus(&self, b: Color) -> Color {
        Color { r: self.r+b.r, g: self.g+b.g, b: self.b+b.b }
    }
}

struct Sphere {
    center: Vector,
    radius: f32,
    color: Color,
}

impl Sphere {
    fn get_normal(&self, pt: Vector) -> Vector {
        return pt.minus(self.center).normalize();
    }
}

struct Light {
    position: Vector,
    color: Color,
}</code></pre> <p> Without knowing the language you can still figure out what's going on. Types are specified after the field names; <code>f32</code> is a 32-bit float and <code>i32</code> a 32-bit integer. There's also a slew of finer grained number types for when you need tight memory control. </p>
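<p> To see how those struct methods read in practice, here's a quick throwaway snippet of my own (not part of the tracer) that chains calls on the <code>Vector</code> type defined above, just like methods on a class: </p> <pre><code>fn main() {
    // assumes the Vector struct and impl from above are in scope
    let a = Vector::new(1.0, 2.0, 3.0);
    let b = Vector::new(0.0, 1.0, 0.0);
    let n = a.plus(b).normalize();
    // a normalized vector has magnitude 1.0 (within float error)
    println!("magnitude = {}", n.magnitude());
}</code></pre>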
<p> Next up I created a few constants. </p> <pre><code>static WHITE:Color = Color { r:1.0, g:1.0, b:1.0 };
static RED:Color   = Color { r:1.0, g:0.0, b:0.0 };
static GREEN:Color = Color { r:0.0, g:1.0, b:0.0 };
static BLUE:Color  = Color { r:0.0, g:0.0, b:1.0 };

static LIGHT1:Light = Light {
    position: Vector { x: 0.7, y: -1.0, z: 1.7 },
    color: WHITE
};</code></pre> <p> Now in my <code>main</code> function I'll set up the scene and create a lookup table of one-letter strings for text mode rendering. </p> <pre><code>fn main() {
    println!("Hello, worlds!");
    let lut = vec!(".","-","+","*","X","M");

    let w = 20*4i;
    let h = 10*4i;

    let scene = vec!(
        Sphere{ center: Vector::new(-1.0, 0.0, 3.0), radius: 0.3, color: RED },
        Sphere{ center: Vector::new( 0.0, 0.0, 3.0), radius: 0.8, color: GREEN },
        Sphere{ center: Vector::new( 1.0, 0.0, 3.0), radius: 0.3, color: BLUE }
    );</code></pre> <p> Now let's get to the core ray tracing loop. It looks at every pixel to see if its ray intersects with the spheres in the scene. It should be mostly understandable, but you'll start to see the differences from C. </p> <pre><code>    for j in range(0,h) {
        println!("--");
        for i in range(0,w) {
            //let tMax = 10000f32;
            let fw:f32 = w as f32;
            let fi:f32 = i as f32;
            let fj:f32 = j as f32;
            let fh:f32 = h as f32;

            let ray = Ray {
                orig: Vector::new(0.0, 0.0, 0.0),
                dir:  Vector::new((fi-fw/2.0)/fw, (fj-fh/2.0)/fh, 1.0).normalize(),
            };

            let mut objHitObj:Option&lt;(Sphere,f32)> = None;

            for obj in scene.iter() {
                let ret = intersect_sphere(ray, obj.center, obj.radius);
                if ret.hit {
                    objHitObj = Some((*obj, ret.tval));
                }
            }</code></pre> <p> The <code>for</code> loops are done with a <code>range</code> function which returns an iterator. Iterators are used extensively in Rust because they are inherently safer than direct indexing. </p> <p> Notice the <code>objHitObj</code> variable. It is set based on the result of the intersection test. In JavaScript I used several variables to track whether an object had been hit, and to hold the hit object and hit distance if it did intersect. In Rust you are encouraged to use options instead. An Option is a special enum with two possible values: None and Some. If it is None then there is nothing inside the option. If it is Some then you can safely grab the contained object. Options are a safer alternative to null pointer checks. </p> <p> Options can hold any object thanks to Rust's generics. In the code above I tried out something tricky, and surprisingly it worked. Since I need to store several values, I created an option holding a tuple, which is like a fixed-size array with fixed types. <code>objHitObj</code> is defined as an option holding a tuple of a <code>Sphere</code> and an <code>f32</code> value. When <code>ret.hit</code> is true I set the option to <code>Some((*obj, ret.tval))</code>, meaning the contents of my object pointer and the hit distance. </p> <p> Now let's look at the second part of the loop, after ray intersection is done. </p> <pre><code>            let pixel = match objHitObj {
                Some((obj,tval)) => lut[shade_pixel(ray,obj,tval)],
                None => " "
            };
            print!("{}", pixel);
        }
    }
}</code></pre> <p> Finally I can check and retrieve the option values using an <code>if</code> statement or a <code>match</code>. Match is like a <code>switch</code>/<code>case</code> statement in C, but with super powers: it forces you to account for every possible code path, and missing a case is a compile error. In the code above I match the Some and None cases. In the <code>Some</code> case it pulls out the nested objects and gives them the names obj and tval, just like the tuple I stuffed into it earlier. This is called <i>destructuring</i> in Rust. If there is a value then it calls <code>shade_pixel</code> and returns the character in the lookup table representing that grayscale value. If the <code>None</code> case happens then it returns a space. In either case we know the <code>pixel</code> variable will have a valid value after the match. It's impossible for <code>pixel</code> to be null, so I can safely print it. </p>
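<p> That exhaustiveness checking is easy to see with a toy example of my own (not part of the tracer): delete the <code>None</code> arm below and the compiler rejects the whole function with a "non-exhaustive patterns" error. </p> <pre><code>fn describe(x: Option&lt;f32>) -> &'static str {
    match x {
        Some(_) => "hit",
        // deleting this arm is a compile error:
        // "non-exhaustive patterns" -- None would be unhandled
        None => "miss",
    }
}</code></pre>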
<p> The rest of my code is basically vector math. It looks almost identical to the same code in JavaScript, just strongly typed. </p> <pre><code>fn shade_pixel(ray:Ray, obj:Sphere, tval:f32) -> uint {
    let pi = ray.orig.plus(ray.dir.scale(tval));
    let color = diffuse_shading(pi, obj, LIGHT1);
    let col = (color.r + color.g + color.b) / 3.0;
    (col * 6.0) as uint
}

struct HitPoint {
    hit: bool,
    tval: f32,
}

fn intersect_sphere(ray:Ray, center:Vector, radius:f32) -> HitPoint {
    let l = center.minus(ray.orig);
    let tca = l.dot(ray.dir);
    if tca &lt; 0.0 {
        return HitPoint { hit:false, tval:-1.0 };
    }
    let d2 = l.dot(l) - tca*tca;
    let r2 = radius*radius;
    if d2 > r2 {
        return HitPoint { hit:false, tval:-1.0 };
    }
    let thc = (r2-d2).sqrt();
    let t0 = tca-thc;
    //let t1 = tca+thc;
    if t0 > 10000.0 {
        return HitPoint { hit:false, tval:-1.0 };
    }
    return HitPoint { hit:true, tval:t0 };
}

fn clamp(x:f32, a:f32, b:f32) -> f32 {
    if x &lt; a { return a; }
    if x > b { return b; }
    return x;
}

fn diffuse_shading(pi:Vector, obj:Sphere, light:Light) -> Color {
    let n = obj.get_normal(pi);
    let lam1 = light.position.minus(pi).normalize().dot(n);
    let lam2 = clamp(lam1, 0.0, 1.0);
    light.color.scale(lam2*0.5).plus(obj.color.scale(0.3))
}</code></pre> <p> That's it. Here's the final result. </p> <p> <img src='http://joshondesign.com/images/98232_iTermScreenSnapz005.png' alt='text'/> </p> <p> So far I'm really happy with Rust. It has some rough edges they are still working on, but I love the direction they are going. It really could be a replacement for C/C++ in lots of cases. </p> <p> Buy or no buy? <b>Buy!</b> <a href='http://www.rust-lang.org'>It's free!</a> </p> http://joshondesign.com/2014/09/17/rustlang