Josh On Design: Art, Design, and Usability for Software Engineers
Wed Sep 17 2014 22:47:29 GMT+0000 (UTC)

60sec Review: Rust Language

Lately I've been digging into Rust, a new programming language sponsored by Mozilla. They recently rewrote their docs and <a href='http://blog.rust-lang.org/2014/09/15/Rust-1.0.html'>announced a roadmap to 1.0</a> by the end of the year, so now is a good time to take a look at it. I went through the <a href='http://doc.rust-lang.org/guide.html'>new Language Guide</a> last night then wrote <a href='https://twitter.com/joshmarinacci/status/512344513725992960/photo/1'>a small ray tracer</a> to test it out. <p> One of the biggest success stories of the last three decades of programming is memory safety. C and C++ may be fast but it's very easy to have dangling pointers and buffer overflows. Endless security exploits come back to this fundamental limitation of C and C++. Raw pointers == lack of memory safety. </p> <p> Many modern languages provide this memory safety, but they do it at runtime with references and garbage collection. JavaScript, Java, Ruby, Python, and Perl all fall into this camp. They accomplish this safety at the cost of runtime speed. While they are typically JITed today instead of interpreted, they are all slower than C/C++ because of their runtime overhead. For many tasks this is fine, but if you are building something low level or where speed really matters, then you probably go back to C/C++ and all of the problems it entails. </p> <p> Rust is different. Rust is a statically typed <i>compiled</i> language meant to target the same tasks that you might use C or C++ for today, but its whole purpose in life is to promote memory safety. By design, Rust code can't have dangling pointers, buffer overflows, or a whole host of other memory errors. Any code which would cause this <b>literally can't be compiled</b>. The language doesn't allow it. I know it sounds crazy, but it really does work. </p> <p> Most importantly, Rust achieves all of these memory safety guarantees at compile time. There is no runtime overhead, making the final code as fast as C/C++, but far safer. </p> <p> I won't go into how all of this works, but the short description is that Rust uses several kinds of pointers that <i>let the compiler prove</i> who owns memory at any given moment. If you write up a situation where the compiler can't predict what will happen, it won't compile. If you can get your code to compile then you are <i>guaranteed to be memory safe</i>. </p> <p> I plan to do all of my native coding in Rust when it hits 1.0. Rust has a robust FFI so interfacing with existing C libs is quite easy. Since I absolutely hate C++, this is a big win for me. :) </p> <p> Coming into Rust I was worried the pointer constraints would make writing code difficult, much like <a href='http://prog21.dadgum.com/38.html'>puzzle languages</a>. I was pleasantly surprised to find it pretty easy to code in. It's certainly more verbose than a dynamic language like JavaScript, but I was able to convert a JS ray tracer to Rust in about an hour. The resulting code roughly looks like what you'd expect from C, just with a few differences. Let's take a look. </p> <p> First, the basic type definitions. I created a Vector, Sphere, Color, Ray, and Light class. Rust doesn't really have classes in the C++/Java sense, but it does have structs enhanced with method implementations, so you can think of them as similar to classes.
</p> <pre><code>use std::num;

struct Vector {
    x:f32,
    y:f32,
    z:f32
}

impl Vector {
    fn new(x:f32,y:f32,z:f32) -> Vector {
        Vector { x:x, y:y, z:z }
    }
    fn scale(&self, s:f32) -> Vector {
        Vector { x:self.x*s, y:self.y*s, z:self.z*s }
    }
    fn plus(&self, b:Vector) -> Vector {
        Vector::new(self.x+b.x, self.y+b.y, self.z+b.z)
    }
    fn minus(&self, b:Vector) -> Vector {
        Vector::new(self.x-b.x, self.y-b.y, self.z-b.z)
    }
    fn dot(&self, b:Vector) -> f32 {
        self.x*b.x + self.y*b.y + self.z*b.z
    }
    fn magnitude(&self) -> f32 {
        (self.dot(*self)).sqrt()
    }
    fn normalize(&self) -> Vector {
        self.scale(1.0/self.magnitude())
    }
}

struct Ray {
    orig:Vector,
    dir:Vector,
}

struct Color {
    r:f32,
    g:f32,
    b:f32,
}

impl Color {
    fn scale (&self, s:f32) -> Color {
        Color { r: self.r*s, g:self.g*s, b:self.b*s }
    }
    fn plus (&self, b:Color) -> Color {
        Color { r: self.r + b.r, g: self.g + b.g, b: self.b + b.b }
    }
}

struct Sphere {
    center:Vector,
    radius:f32,
    color: Color,
}

impl Sphere {
    fn get_normal(&self, pt:Vector) -> Vector {
        return pt.minus(self.center).normalize();
    }
}

struct Light {
    position: Vector,
    color: Color,
}</code></pre> <p> Without knowing the language you can still figure out what's going on. Types are specified after the field names; f32 means a 32-bit floating point value and i32 a 32-bit integer. There's also a slew of finer grained number types for when you need tight memory control. </p> <p> Next up I created a few constants. </p> <pre><code>static WHITE:Color = Color { r:1.0, g:1.0, b:1.0};
static RED:Color = Color { r:1.0, g:0.0, b:0.0};
static GREEN:Color = Color { r:0.0, g:1.0, b:0.0};
static BLUE:Color = Color { r:0.0, g:0.0, b:1.0};
static LIGHT1:Light = Light { position: Vector { x: 0.7, y: -1.0, z: 1.7} , color: WHITE };</code></pre> <p> Now in my <code>main</code> function I'll set up the scene and create a lookup table of one letter strings for text mode rendering. </p> <pre><code>fn main() {
    println!("Hello, worlds!");
    let lut = vec!(".","-","+","*","X","M");

    let w = 20*4i;
    let h = 10*4i;

    let scene = vec!(
        Sphere{ center: Vector::new(-1.0, 0.0, 3.0), radius: 0.3, color: RED },
        Sphere{ center: Vector::new( 0.0, 0.0, 3.0), radius: 0.8, color: GREEN },
        Sphere{ center: Vector::new( 1.0, 0.0, 3.0), radius: 0.3, color: BLUE }
    );</code></pre> <p> Now let's get to the core ray tracing loop. This looks at every pixel to see if its ray intersects with the spheres in the scene. It should be mostly understandable, but you'll start to see the differences with C. </p> <pre><code>    for j in range(0,h) {
        println!("--");
        for i in range(0,w) {
            //let tMax = 10000f32;
            let fw:f32 = w as f32;
            let fi:f32 = i as f32;
            let fj:f32 = j as f32;
            let fh:f32 = h as f32;

            let ray = Ray {
                orig: Vector::new(0.0,0.0,0.0),
                dir: Vector::new((fi-fw/2.0)/fw, (fj-fh/2.0)/fh,1.0).normalize(),
            };

            let mut objHitObj:Option&lt;(Sphere,f32)> = None;

            for obj in scene.iter() {
                let ret = intersect_sphere(ray, obj.center, obj.radius);
                if ret.hit {
                    objHitObj = Some((*obj,ret.tval));
                }
            }</code></pre> <p> The <code>for</code> loops are done with a <code>range</code> function which returns an iterator. Iterators are used extensively in Rust because they are inherently safer than direct indexing. </p> <p> Notice the <code>objHitObj</code> variable. It is set based on the result of the intersection test. In JavaScript I used several variables to track if an object had been hit, and to hold the hit object and hit distance if it did intersect. In Rust you are encouraged to use options. An Option is a special enum with two possible values: None and Some.
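For contrast, here is roughly what that bookkeeping looks like on the JavaScript side. This is only a sketch of the pattern, not the actual code from my JS ray tracer, and the names are made up: </p> <pre><code>// Sketch: tracking a hit with loose variables in JavaScript
function findHit(scene, ray, intersectSphere) {
    var hit = false;      // did we hit anything at all?
    var hitObj = null;    // which sphere we hit
    var hitDist = 0;      // how far along the ray the hit happened
    for (var k = 0; k &lt; scene.length; k++) {
        var ret = intersectSphere(ray, scene[k].center, scene[k].radius);
        if (ret.hit) {
            hit = true;
            hitObj = scene[k];
            hitDist = ret.tval;
        }
    }
    // Every caller must remember to check `hit` before touching hitObj or hitDist.
    // Nothing stops you from forgetting.
    return { hit: hit, obj: hitObj, tval: hitDist };
}</code></pre> <p> With an Option, those three loose variables collapse into a single value that the compiler makes you inspect.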
If it is None then there is nothing inside the option. If it is Some then you can safely grab the contained object. Options are a safer alternative to null pointer checks. </p> <p> Options can hold any object thanks to Rust's generics. In the code above I tried out something tricky and surprisingly it worked. Since I need to store several values I created an option holding a tuple, which is like a fixed size array with fixed types. <code>objHitObj</code> is defined as an option holding a tuple of a <code>Sphere</code> and an <code>f32</code> value. When I check if <code>ret.hit</code> is true I set the option to <code>Some((*obj,ret.tval))</code>, meaning the contents of my object pointer and the hit distance. </p> <p> Now let's look at the second part of the loop, once ray intersection is done. </p> <pre><code>            let pixel = match objHitObj {
                Some((obj,tval)) => lut[shade_pixel(ray,obj,tval)],
                None => " "
            };

            print!("{}",pixel);
        }
    }</code></pre> <p> Finally I can check and retrieve the option values using an <code>if</code> statement or a <code>match</code>. Match is like a <code>switch</code>/<code>case</code> statement in C, but with super powers. It forces you to account for all possible code paths, so a forgotten case is caught at compile time. In the code above I match the <code>Some</code> and <code>None</code> cases. In the <code>Some</code> case it pulls out the nested objects and gives them the names obj and tval, just like the tuple I stuffed into it earlier. This is called <i>destructuring</i> in Rust. If there is a value then it calls <code>shade_pixel</code> and returns the character in the lookup table representing that grayscale value. If the <code>None</code> case happens then it returns a space. In either case we know the <code>pixel</code> variable will have a valid value after the match. It's impossible for <code>pixel</code> to be null, so I can safely print it. </p> <p> The rest of my code is basically vector math. It looks almost identical to the same code in JavaScript, just strongly typed. </p> <pre><code>fn shade_pixel(ray:Ray, obj:Sphere, tval:f32) -> uint {
    let pi = ray.orig.plus(ray.dir.scale(tval));
    let color = diffuse_shading(pi, obj, LIGHT1);
    let col = (color.r + color.g + color.b) / 3.0;
    (col * 6.0) as uint
}

struct HitPoint {
    hit:bool,
    tval:f32,
}

fn intersect_sphere(ray:Ray, center:Vector, radius:f32) -> HitPoint {
    let l = center.minus(ray.orig);
    let tca = l.dot(ray.dir);
    if tca &lt; 0.0 {
        return HitPoint { hit:false, tval:-1.0 };
    }
    let d2 = l.dot(l) - tca*tca;
    let r2 = radius*radius;
    if d2 > r2 {
        return HitPoint { hit: false, tval:-1.0 };
    }
    let thc = (r2-d2).sqrt();
    let t0 = tca-thc;
    //let t1 = tca+thc;
    if t0 > 10000.0 {
        return HitPoint { hit: false, tval: -1.0 };
    }
    return HitPoint { hit: true, tval: t0}
}

fn clamp(x:f32,a:f32,b:f32) -> f32{
    if x &lt; a { return a; }
    if x > b { return b; }
    return x;
}

fn diffuse_shading(pi:Vector, obj:Sphere, light:Light) -> Color{
    let n = obj.get_normal(pi);
    let lam1 = light.position.minus(pi).normalize().dot(n);
    let lam2 = clamp(lam1,0.0,1.0);
    light.color.scale(lam2*0.5).plus(obj.color.scale(0.3))
}</code></pre> <p> </p> <p> That's it. Here's the final result. </p> <p> <img src='http://joshondesign.com/images/98232_iTermScreenSnapz005.png' alt='text'/> </p> <p> </p> <p> So far I'm really happy with Rust. It has some rough edges they are still working on, but I love the direction they are going. It really could be a replacement for C/C++ in lots of cases. </p> <p> Buy or no buy?
<b>Buy!</b> <a href='http://www.rust-lang.org'>It's free!</a> </p> http://joshondesign.com/2014/09/17/rustlang

Improving Regular Expressions with Typography

After the more <a href='http://joshondesign.com/2014/09/10/cyberprog'>abstract talk</a> I’d like to come back to something concrete. <a href='http://en.wikipedia.org/wiki/Regular_expression'>Regular Expressions</a>, or <i>regex</i>, are powerful but often inscrutable. Today let’s see how we could make them easier to use through typography and visualization without diminishing that power. <p> </p> <p> </p> <p> Regular Expressions are essentially a mini language embedded inside of your regular language. I’ve often seen regex written like this, </p> <pre><code>new Regex("^\\s+([a-Z]|[0-9])\\w+\\$$") // regex for special vars</code></pre> <p> then sometimes reformatted like this </p> <pre><code>//regex for special vars
new Regex(
    "^"               // start of line
    +"\\s+"           // one or more whitespace
    +"([a-Z]|[0-9])"  // one letter or number
    +"\\w+"           // one word
    +"\\$"            // the literal dollar sign
    +"$"              // end of line
)</code></pre> <p> </p> <p> The fact that the author had to manually split up the text and add comments simply <i>screams</i> for a better syntax. Something far more readable but still terse. It would seem that readability and conciseness are mutually exclusive. Anything more readable would also be far longer, eliminating much of the value of regex, right? </p> <p> Au contraire! We have a special power at our disposal: we can render whatever we want in the editor. The compiler doesn’t care as long as it receives the description in some canonical form. So rather than choosing readable vs terse we can cheat and do both. </p> <p> </p> <h2 id="id60968">Font Styling</h2> <p> Let’s start with the standard syntax but replace delimiters and escape sequences with typographic choices. This regex looks for variable names in a made up language. Variables must start with a letter or number, followed by any word character, followed by a dollar sign. Here is the same regex using bold italics for all special chars: </p> <p> <img src='http://joshondesign.com/images/16121_SafariScreenSnapz040.png' alt='text'/> </p> <p> </p> <p> </p> <p> Those problematic double escaped backslashes are gone. We can tell which dollar sign is a literal and which is a magic variable thanks to font styling. </p> <p> </p> <h2 id="id29377">Color and Brackets</h2> <p> Next we can turn the literals green and the braces gray, so it’s clear which part is actual words. We can also replace the <code>$</code> and <code>^</code> with symbols that actually look like the beginning and ending of lines: opening and closing up brackets or floor brackets. </p> <p> <img src='http://joshondesign.com/images/12535_SafariScreenSnapz041.png' alt='text'/> </p> <p> </p> <p> Now one more thing, let’s replace the w and s magic characters with something that looks even more distinct from plain text: letter boxes. These are actual unicode characters from the <a href='http://en.wikipedia.org/wiki/Enclosed_Alphanumeric_Supplement'>"Unicode Characters in the Enclosed Alphanumeric Supplement Block"</a>. </p> <p> </p> <p> <img src='http://joshondesign.com/images/42874_SafariScreenSnapz042.png' alt='text'/> </p> <p> </p> <p> Now the regex is much easier to read but still compact. However, unless you are really familiar with regex syntax you may still be confused. You still have a bunch of specific symbols and notation to remember. How could we solve this?
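As an aside, plain JavaScript can already get part of the way to the commented, vertical style shown at the top of the post. This is only a sketch, not a proposal; the join() trick is mine, and I widened [a-Z] to [a-zA-Z] so the character class is actually valid: </p> <pre><code>// Sketch: the "comment every fragment" style in plain JavaScript
var specialVar = new RegExp(
    [
        "^",                 // start of line
        "\\s+",              // one or more whitespace characters
        "([a-zA-Z]|[0-9])",  // one letter or number
        "\\w+",              // one or more word characters
        "\\$",               // the literal dollar sign
        "$"                  // end of line
    ].join("")
);

console.log(specialVar.test("  total1$"));  // true
console.log(specialVar.test("total$"));     // false: no leading whitespace</code></pre> <p> That helps, but nothing keeps the comments and the pattern from silently drifting apart.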
</p> <p> </p> <h2 id="id48829">Think Bigger</h2> <p> Let’s invent a second notation that is even easier to read. We already have a clue of how this should work. Good programmers write the regex vertically with comments as an ad hoc secondary notation. Let’s make it official. </p> <p> If we have two views then the editor could switch between them as desired. When the user clicks on the regex it will expand to something like this: </p> <p> </p> <p> <img src='http://joshondesign.com/images/78563_SafariScreenSnapz044.png' alt='text'/> </p> <p> </p> <p> We get what looks like a tiny spreadsheet where we can edit any term directly, using code completion so we don’t have to remember the exact terms. Furthermore the IDE can show the extra documentation hints only when we are in this "detailed" mode. </p> <p> Even with this though, there is still a problem with regex. Just by looking at it you can’t always tell what it will match and what it won’t. Without a long comment, how do programmers today know what a regex will match? They use their brains. Yes, they actually look at the text and imagine what it will match in their heads. They essentially simulate the computer running the regex, mentally. This is totally backwards! The whole point of a computer is to simulate things for us, not the other way around. Can’t we make the computer do its job?! </p> <p> </p> <h2 id="id70728">Simulate</h2> <p> Instead of simulating the regex mentally, let’s give it some strings and have the computer tell us what will match. Let’s see what that would look like: </p> <p> </p> <p> <img src='http://joshondesign.com/images/20697_SafariScreenSnapz045.png' alt='regex tester'/> </p> <p> </p> <p> </p> <p> We can type in as many examples as we want. Having direct examples also lets us test out the edge cases, which is often where regexes fail. </p> <p> Hmm. You know… this is basically some unit tests for the regex. If we just add another column like this: </p> <p> </p> <p> <img src='http://joshondesign.com/images/5053_SafariScreenSnapz046.png' alt='regex unit tests'/> </p> <p> </p> <p> then we can see at a glance if the regex is working as expected. In this example the last test is failing so it is highlighted in red. </p> <p> Most importantly, the unit tests are right next to the regex in the code; <i>right where they are used</i>. If we collapse the triangle then everything goes away. It’s still there, just hidden, until we need it. </p> <p> This is the power of having a smart IDE and flexible syntax. Because these are only visualization changes they would work with any existing compiler. No new language required. </p> http://joshondesign.com/2014/09/15/regextypo5

Paper and the Cybernetically Enhanced Programmer

I’ve talked a lot about ways to improve the syntax and process of writing code. In <a href='http://joshondesign.com/2014/08/22/typopl'>Typographic Programming Language</a>, <a href='http://joshondesign.com/2014/08/25/typopl2'>Fonts</a>, and <a href='http://joshondesign.com/2014/09/02/bar'>Tabs Vs Spaces</a> I've talked about the details of how to improve programming. However, I haven't really talked about my larger vision. Where am I actually going with this? <p> My real goal is to build the <b>Ultimate IDE and Programming Language</b> for solving problems cleanly and simply. Ambitious much? </p> <p> Actually, my real goal is to create the <b>computer from Star Trek</b> (the one with Majel Barrett's voice). </p> <p> Actually, my real goal is to create <b>a cybernetically enhanced programmer</b>. </p> <p> Okay.
Let’s back up a bit. </p> <p> </p> <h2 id="id75096">The Big Picture</h2> <p> When we are programming, what are we really doing? What is the essence? Once we figure that out we should be able to work backwards from there. </p> <p> Programming is <i>when you have a problem to solve and you tell the computer how to solve it for you</i>. That’s really it. Everything we do comes back to that fundamental. Functions are a way of encapsulating the solution in small chunks so we can reason about it. The same thing with objects (often used to "model" what we are solving). Unit tests help verify what we are solving, and often help define the problem as well. At the end of the day it's all to serve the goal of teaching the computer how to solve problems for us. </p> <p> So how could we compare one PL against another? </p> <p> </p> <h2 id="id54582">Metrics</h2> <p> From here out I'm going to use Programming Language, PL, to mean the entire system of IDE, compiler, editor, build tools, etc. The entire system that lets you have the computer solve a problem for you. </p> <p> How can we say that one PL is better than another? By what metric? Well, how about <i>what lets you solve the problem the fastest, or the easiest, or with the least headache</i>. That sounds nice and is certainly true, but it's rather subjective. We need something more empirical. </p> <p> Hmm. That's not very helpful. Let's back up again. </p> <p> </p> <h2 id="id75064">Paper and Pencil</h2> <p> What is a pad of paper and a pencil? It’s a thinking device. If you want to add up some numbers you can do it in your head, but after more than a few numbers it becomes tricky to manage them all. So we write them down. We outsource part of the process to some paper. </p> <p> If we want to do division we can do it with the long division process. This is actually not the most efficient way to divide numbers, but it works well because you can have the paper manage all of the state for you. </p> <p> What if you need to remember a list of things to do? You write it down on paper. The paper becomes an extension of your brain. It is <i>a tool for thinking</i>. This perhaps explains some people’s fetish over Moleskine-style sketchbooks (not that I would <a href='https://www.kickstarter.com/projects/joeycofone/baron-fig-sketchbooks-and-notebooks-for-thinkers'>ever invest in a Kickstarter for sketchbooks</a>). </p> <p> </p> <h2 id="id58396">A Tool for Thinking</h2> <p> If we think of a programming language as a tool for thinking, then the comparison becomes clearer. The PL helps you tell the computer what to do. So a good PL would help you with the parts of programming that are hard: namely keeping state in your head. A bad PL requires you to remember a lot of things. </p> <p> For example, suppose you write a function that calls another function named 'foo'. You must remember what this other function does and what it accepts and what it returns. If the function is named well, say 'increment', then the function name itself helps your brain. You have less to remember because the function name carries information. </p> <p> Now suppose we have type information in our PL. The function increment only takes numbers. Now I don’t have to remember that it only takes numbers; the compiler will enforce it for me. Of course I only find this out when I compile the code. To make the PL give me less to remember the IDE can compile constantly, giving me an immediate warning when I do something bad.
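Here’s a tiny sketch of what I mean in plain JavaScript, using a JSDoc annotation. The increment function is just an illustration, but type checkers that read JSDoc (Google’s Closure Compiler, for example) really will catch the bad call for me: </p> <pre><code>/** @param {number} n */
function increment(n) {
    return n + 1;
}

increment(41);      // fine
increment("forty"); // a JSDoc-aware checker or IDE flags this before the code ever runs</code></pre> <p>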
Code completion can also help by only suggesting local values that are numbers. </p> <p> With these types of features the PL acts as a tool for thinking. An extension of the brain by holding state. </p> <p> </p> <p> So we can say a PL is “better” if it reduces the cognitive load of the programmer. Things like deterministic performance and runtime speed are nice, but they are (or at least should be) secondary to <i>reducing the cognitive load of the programmer</i>. Of course a program which is too slow to be used is a failure, so ultimately it does matter. However computers get faster. The runtime performance is less of a concern than the quality of the code. Perhaps this explains the resurgence in functional programming styles. The runtime hit matters less than it did 30 years ago, so we can afford to waste cycles on things which reduce the cognitive load of the programmer. </p> <p> </p> <p> </p> <h2 id="id15875">Regex</h2> <p> Now, where were we? Oh, right; the Fire Swamp. </p> <p> Let's look at a more concrete example. Regular expressions. Regexes are powerful. They are concise. But a given regex is <b>not</b> easy to read if you didn’t write it yourself; or even if you did but you didn’t <b>just</b> write it. Look at your regex from 6 months ago sometime. I hope you added good documentation. </p> <p> A regex conveys a lot of information. You have a lot to load up into your brain. When you look at a regex to see what it does you have to start <i>simulating</i> it in your brain. Your brain basically becomes a crappy computer that executes the regex on hypothetical strings. </p> <p> That's crazy. Why are we simulating a state machine in our heads? That’s what we have computers for! <b>To simulate things</b>. We should have the PL show the regex to us in an easier to understand form, or perhaps even multiple forms. With unit tests. And visualizers. Or an embedded regex simulator to show how it works. I have a lot to say about regular expressions in an upcoming blog, but for now I'll just say they are some extremely low hanging fruit in the mission to reduce cognitive load. </p> <p> </p> <p> </p> <h2 id="id27946">Expanding Complexity </h2> <p> Now you might think, if we make a PL which sufficiently reduces cognitive load then programmers will have little to do. Programming will become so easy that anyone could do it. True, this might happen to some degree. In fact, I would argue that letting novices program is actually a good thing (though we would call it problem description rather than programming). However, in general this won’t happen. As we decrease the cognitive load we <i>increase the complexity of the programs we can make</i>. </p> <p> There seems to be a limit to how much complexity the human brain can handle. This limit varies from person to person of course. And it is affected by your health, hunger level, stress, tiredness, etc. (Never code on an empty stomach.) But there is a limit. </p> <p> Historically better tools have reduced the complexity at hand to something below the cognitive limit of the programmer. So what did we do? <b>We tackled more complex tasks</b>. </p> <p> <i>Josh’s first postulate of complexity</i>: <b>Just as data always expands to fill available disk space, programing tasks always increase in complexity to fit our brain budget</b>. </p> <p> Offloading complexity to tools merely allows us to tackle bigger problems. It’s really no different then the brain boost we got from inventing paper and pencil, just separated by a few thousand years. 
[1] </p> <p> Now I think we can agree that a PL which reduces more cognitive load of the human is better than one which reduces less. That's our metric. So how can we turn this into something actionable? Does it suggest possible improvements to real world programming languages? </p> <p> The answer is yes! If we stop focusing on PL implementations but rather the user experience of the programmer, then many ideas become readily available. Everything I’ve discussed in previous blogs is driven by this core principle. Showing a color or image <i>inline</i> reduces cognitive load because you don’t have to visualize in your head what the color or image actually looks like. The editor can just do it for you. This is just the beginning. </p> <p> </p> <h2 id="id54698">My Forever Project</h2> <p> These are ideas I’ve been working on a long time. In fact I recently realized some of these ideas have been in my head for twenty years. I found evidence of tools I (attempted) to write from my college papers. It’s only recently that everything has started to gel into an expressible form. It’s also only recently that we’ve had the "problem" of computers with too much computational power going to waste. This is my <a href='http://jwb.io/20130122-the-joys-of-having-a-forever-project.html'>Forever Project</a>. </p> <p> There’s so much possibility now. If a notepad and a text editor are our current cybernetic enhancements, what else could we build? Could Google Glass help you with programming? How about an Oculus Rift? Or using an iPad as an extra screen? The answer is Yes! We could definitely use these to reduce cognitive load while interacting with a computer. We just might not call all of these tasks "programming" but they are (or will be shortly as software continues to eat the world). </p> <p> <i>deep breath</i> </p> <p> </p> <p> My concept summarized: Programming systems should not be thought of as ways to make a computer do tricky things. They are ways to make a computer solve problems for you. Problems you would have to do in your head without them (assuming you could do them at all). Thus PLs are <b>tools for thinking</b>. This gives us a metric. Do particular PLs help us think better and solve problems better? As it turns out, most PLs fail miserably by this metric. Or at least they could be a whole lot better. Many ideas from the 60s and 70s still remain unimplemented and unused. </p> <p> But don’t be sad. Our field is less than 100 years old. We are actually doing pretty well. It took a few thousand years to invent suspension bridges and they still don’t work 100% of the time. </p> <p> So from now on let us consider how to make better tools for thinking. Everything <a href='http://worrydream.com'>Bret Victor</a> has been doing, and <a href='http://alarmingdevelopment.org/?p=893'>Jonathan Edwards</a>, and even the work from the 70s & 80s that <a href='http://en.wikipedia.org/wiki/Alan_Kay'>Alan Kay</a> did with Smalltalk and the Dynabook come back to the same thing: <i>building better tools for thinking</i>. With better tools we can think better thoughts. With better tools we can solve bigger problems. </p> <p> So let’s get crackin'! </p> <p> footnote: [1] Before someone calls me on this, I have no idea when paper and pencil were first invented for the purposes of reducing problem complexity. Probably in ancient Egypt for royal bookkeepers. I'll leave that as an exercise to the reader. 
</p> http://joshondesign.com/2014/09/10/cyberprog\nTabs vs Spaces, the Pointless WarSo far my posts on Typographic Programming have covered <a href='http://joshondesign.com/2014/08/25/typopl2'>font choices</a> and <a href='http://joshondesign.com/2014/08/22/typopl'>formatting</a>. Different ways of rendering the source code itself. I haven’t covered the spacing of the code yet, or more specifically: indentation. Or even more specifically: tabs vs spaces. <p> Put on your asbestos suits, folks. It’s gonna get <b>hot</b> in <i>this</i> kitchen. </p> <p> Traditionally source code has been rendered with a monospace font. This allows for manual horizontal positioning with spaces or tab characters. Of course the tab character doesn’t have a defined width (I’ll explain in a moment why) so flame wars have erupted around spaces vs tabs, on par with the great editor wars of the last century. Ultimately these are pointless arguments. Tabs vs spaces is an artifact of trying to render code into a monospace grid of characters. It’s the 21st century! We can do better than our dad's 1970s terminal. In fact, they did better in the <b>19th century</b>! </p> <p> </p> <h2 id="id1328">In The Beginning</h2> <p> Let’s start at the beginning. Fixed whitespace indenting can be used to line things up so they become pretty, and therefore easier to read. But that's a lot of work. All that pressing of space bars and adjusting when things change. </p> <p> Instead of manually controlling whitespace what if we used tab stops. I don’t mean the tab character, which is mapped to either 4 or 8 spaces, but actual tab stops. Yes. They used to be a real physical thing. </p> <p> <img src='http://i.imgur.com/mi0fg2Q.jpg' alt='image of typewriter tab stop'/> </p> <p> In the olden days, back when we used manual typewriters (I think I was the last high school class to take typing on such machines), there was such a thing as a tabstop. These were vertical brackets along the page (well, along that metal bar at the bottom of the current line). These tiny pieces of metal literally stopped the tabs, thus giving them the name <i>tabstops</i>. We were so creative with names in those days. </p> <p> When you hit the tab key the <i>cursor</i> (a rapidly spinning metal ball imprinted with the noun: “Selectric”) would jump from the left edge of the paper to the first tabstop. Hit tab again and it will go to the next tabstop. Now of course, these tab stops were adjustable, so you could choose the indenting style you wanted for your particular document. </p> <p> Let me repeat that. The tabs stops could be adjusted to the indenting style of <i>your particular document</i>. Inherent is the concept that there is no “one right way”, but rather the format must suit the needs of the particular document, or part of a document, that you are writing. </p> <p> When WYSIWYG editors came along they preserved the notion of a tabstop. They even made it better by giving you nice vertical lines to see the effect of changing the tabstop. When you hit tab the text would move to the stop. If you later move the stop then the text aligned with it will magically move as well. Dynamic tabstops! Yay. We can finally rock like its the 1990s. </p> <p> </p> <p> <img src='http://computersandcomposition.candcblog.org/archives/v10/10_2_html/10_2_8_Boudreau1.gif' alt='Word for Mac, circa 1991'/> </p> <p> </p> <p> </p> <h2 id="id63003">Semantic Indentation</h2> <p> So why do we go back to the 1970s with our text editors? 
Tabstops are a simple concept for semantically (sorta) indenting our code. Let’s see what some code would look like with simple <i>tabular</i> semantic indenting. </p> <p> Here’s some code with no formatting other than a standard indent. </p> <p> <img src='http://joshondesign.com:3194/images/3391_SafariScreenSnapz030.png' alt='text'/> </p> <p> This is your typical Cish code with brackets and parameters. It would be nice to line up the parameters with their types. The drawRect code is also similar between lines. We should clean that up too. </p> <p> </p> <p> Here is code with semantic indenting. </p> <p> <img src='http://joshondesign.com:3194/images/41902_SafariScreenSnapz031.png' alt='text'/> </p> <p> </p> <p> How would you type in such code? When you hit the tab key the text advances to the next tabstop. these tabstops are dynamic, however. Instead of giving you a line with a ruler at the top, the tabstops automatically expand to fit the text in the column. Essentially they act more like spreadsheet cells than tab stops. </p> <p> Furthermore, the text will be left aligned at the tabstop by default, but right aligned for text that ends with a comma or other special character. This process is completely automatic and hidden, of course. The programmer just hits the tab key and continues typing, the IDE handles all of the formatting details, <i>as it should be</i>. We humans write the important things (the code) and let the computer handle the busy work (formatting). </p> <p> The tab stops (or columns if you think of it as a table) don’t extend the entire document. They only go down as far as the next logical chunk. There could be multiple ways to define a ‘chunk’ semantically, but one indicator would be a double space. If you break the code flow with a double space then it will revert to the document / project wide defaults. This lets us use standard indentation for common structures like functions and flow control bracing, while still allowing for custom indentation when needed. </p> <p> </p> <h2 id="id86177">Ludicrous Speed</h2> <p> </p> <p> Furthermore, using semantic indentation could completely remove the need for braces as block delimiters. Semantic indentation can replace where blocks begin and end. </p> <pre><code>if(x==y) { foo } else { bar }</code></pre> <p> becomes </p> <pre><code>if x==y foo else bar</code></pre> and<pre><code>if(x==y) { foo }</code></pre> can become<pre><code>if x==y foo</code></pre> or even<pre><code>if x==y foo</code></pre> <p> using the tab character. </p> <p> This might be a tad confusing, however, because there is only whitespace between the <code>x==y</code> and the <code>foo</code>. How do we know its not a space instead? If you hit the tab key, which indicates you are going to the next chunk instead of just a long conditional expression, the editor could draw a light glyph where the tab is. Perhaps a rightward unicode arrow. </p> <p> Now I know the Rubyists and Pythoners will say that they've already removed the block delimiters. Quite true, but this goes one step further. </p> <p> Python takes the choice of whitespace away from the programmer, but the programmer still has to implement it. With semantic indentation the entire job of formatting is taken away. You just type your code and the editor does the right thing. Such a system also opens the door for alternative rendering of the code in particular circumstances. </p> <p> </p> <h2 id="id274">Better Fonts</h2> <p> And of course we come to our final advantage. 
Without manual formatting with spaces we don't need to be restricted to a monospace font anymore. Our code could look like this: </p> <p> <img src='http://joshondesign.com:3194/images/43574_SafariScreenSnapz032.png' alt='text'/> </p> <p> Semantic indenting. Less typing, more readable code. Let’s rock like it’s the 1990s! </p> http://joshondesign.com/2014/09/02/bar

60 sec review: The Art of Lego Design

<b>The Art of LEGO Design: Creative Ways to Build Amazing Models</b> <p> <i>by Jordan Schwartz</i> </p> <p> </p> <p> <a href='http://www.nostarch.com'>No Starch Press</a> is really doubling down on their Lego books. Their latest is a stunner. <a href='http://www.nostarch.com/legodesign'>The Art of Lego Design</a> by Jordan Schwartz is less of an art book and more of a hands-on guide. It shows actual techniques used by the Lego artists featured in other No Starch books like Mike Doyle’s <a href='http://www.nostarch.com/beautifullego'>Beautiful Lego</a>. </p> <p> As one of the LEGO Group’s youngest staff designers, <a href='http://www.jrschwartz.com'>Jordan Schwartz</a> worked on a number of official LEGO sets. His attention to detail and gift for teaching really come through in the book. Many of the models that seem impossible at first, such as stained glass windows, are actually pretty simple once explained in the book. </p> <p> <i>The Art of Lego Design</i> is the perfect entry point to the rabbit hole that is Lego sculpture. With Lego-specific jargon like cheese slopes, SNOT (Studs Not On Top), and the Lowell sphere, it would be easy to get lost, but Jordan explains it all clearly, along with general design principles like texture and composition. </p> <p> The book takes the reader through many styles of construction and the technical advantages of different pieces, with a dash of Lego history thrown in. I didn’t know there once was a Lego set with a fuzzy bear rug or a line of big Lego people for the Technic sets. </p> <p> </p> <p> Read or Read Not? <b>Read!</b> </p> <p> <a href='http://www.nostarch.com/legodesign'>Get it from No Starch Press</a> </p> http://joshondesign.com/2014/09/01/legodesign

Typographic Programming: Fonts

Apparently <a href='http://joshondesign.com/2014/08/22/typopl'>my last post</a> hit <a href='https://news.ycombinator.com/item?id=8214257'>HackerNews</a> and I didn’t know it. That’s what I get for not checking my server logs. <p> Going through the comments I see that about 30% of people find it interesting and 70% think I’m an idiot. That’s much better than usual, so I'm going to tempt fate with another installment. The general interest in my post (and let’s face it, typography for source code is a pretty obscure topic of discussion) spurred me to write a follow-up. </p> <p> In today’s episode we’ll tour the fonts themselves. If we want to reinvent computing it’s not enough to grab a typewriter font and call it a day. We have to plan this carefully, and that starts with a good selection of typefaces. </p> <p> </p> <p> Note that I am <i>not</i> going to use color or boxes or any other visual indicators in this post. Not that they aren’t useful, but today I want to see how far we could get with just font styling: typeface, size, weight, and style (italics, small caps, etc.). </p> <p> </p> <p> </p> <p> Since I'm formatting symbolic code, data, and comments, I need a range of typefaces that work well together. I’ve chosen the Source Pro family from Adobe. It is open source and freely redistributable, with a full set of weights and italics.
More importantly, it has three main faces: a monospace font: Source Code Pro, a serif font: Source Serif Pro, and a sans-serif font: Source Sans Pro. All three are specifically designed to work together and have a range of advanced glyphs and features. We won't use many of these features today but they will be nice to have in the future. </p> <p> </p> <p> Let's start formatting some code. This will be a gradual process where we choose the basic formatting and then build on top of it for different semantic chunks. </p> <p> For code itself we will stick with the monospace font: Source Code Pro. I would argue that a fixed width font is <i>not</i> actually essential for code, but that’s an argument for another day when we look at indentation. For today we’ll stick with fixed width. </p> <p> </p> <p> Code comments and documentation will use Source Serif Pro. Why? Well, comments don’t need the explicit alignment of a monospace font, so definitely not Source Code Pro. Comments are prose. The sans serif font would work okay but for some reason when I think "text" I think of serifs. It feels more like prose. More <i>texty</i>. </p> <p> So I won’t use Source Sans Pro today but I <i>will</i> save it for future use. Using the Source [x] Pro set gives us that option. </p> <p> Below is a simple JavaScript function set with the default weights of those two fonts. This is the base style we will work from. </p> <p> </p> <p> <img src='http://joshondesign.com:3194/images/38720_SafariScreenSnapz025.png' alt='text'/> </p> <p> </p> <p> </p> <p> </p> <p> </p> <p> So that’s a good start, but I can immediately think of a few improvements. Code (at least in C derived languages) has five main elements: comments, keywords, symbols, literals, and miscellaneous — or what I like to call ‘extraneous cruft’. It’s mainly parentheses and brackets for delimiting functions and procedure bodies. It is <i>possible</i> to design a language which uses ordering to reduce the need for delimiters, or to be rid of them completely with formatting conventions (as I talked about last week). However, today’s job is to just restyle without changing the code so let’s leave them unmolested for now. Instead we will minimize their appearance by setting them in a thin weight. (All text is still in black, though). </p> <p> Next up is symbols. Symbols are the part of a program that the programmer can change. These are arguably the most important part of the program; the parts we spend the most time thinking about, so let’s make them stand out with a very heavy weight: bold 700. </p> <p> </p> <p> <img src='http://joshondesign.com:3194/images/28447_SafariScreenSnapz026.png' alt='text'/> </p> <p> </p> <p> </p> <p> Better, but I don’t like how the string literal blends in with the rest of the code. String literals are almost like prose so let’s show them in serif type, this time with a bolder weight and shrunk a tiny bit (90% of normal). </p> <p> For compatibility I did the same with numeric literals. I’m not sure if ‘null’ is really a literal or a symbol, but you can assign it as a value so I’ll call it a literal. </p> <p> <img src='http://joshondesign.com:3194/images/34373_SafariScreenSnapz027.png' alt='text'/> </p> <p> </p> <p> Next up is keywords. Keywords are the part of the language that the programmer <i>cannot</i> change. They are strictly defined by the language and completely reserved. Since they are immutable it doesn’t really matter how we render them. I could use a smiley face for the <code>function</code> keyword and the compiler wouldn’t care.
It always evaluates to the same thing. However, unlike my 3yr old’s laptop, I don’t have a smiley face key on my computer; so let’s keep the same spelling. I do want to do something unorthodox though. Let’s put the keywords in <i>small caps</i>. </p> <p> </p> <p> Small caps are glyphs the size of lower case letters, but drawn like the upper case letters. To do small caps right you can’t just put your text in upper case and shrink it down. It would look strange. Small caps are actually different glyphs designed to have a similar (but not identical) width and a shorter height. They are hard to generate programmatically. This is one place where a human has to do the hard work. Fortunately we have small caps at our disposal thanks to the great contributions by <a href='http://blog.typekit.com/2012/11/02/source-sans-pro-adoption-and-development-to-date/'>Logos and type designer Marc Weymann</a>. Open source is a good thing. </p> <p> </p> <p> <img src='http://joshondesign.com:3194/images/77897_SafariScreenSnapz028.png' alt='text'/> </p> <p> </p> <p> </p> <p> Now we are getting somewhere. Now the code has a <b>dramatically</b> different feel. </p> <p> </p> <p> There’s one more thing to address: the variables. Are they symbols like function names? Yes, but it feels different than function names. They are also not usually prefixed with a parent object or namespace specifier. Really we have three cases. A fully qualified symbol like ‘baz’ in foo.bar.baz, the prefix part (foo.bar), and standalone variables that aren’t qualified at all (like ‘x’). This distinction applies whether or not the symbol is a function or an object reference (it could actually be both in JavaScript). </p> <p> </p> <p> In the end I decided these cases are related but distinct. Standalone symbols have a weight of 400. Technically this is the default weight in CSS and shouldn’t appear to be ‘bold’, but since the base font is super light, regular will feel heavier against it. The symbol at the end of a qualifier chain will also be bold, but with a weight of 700. Finally the prefix part will be italics to further distinguish it. There really isn’t a right answer here; other combinations would work equally well, so I just played around until I found something that felt right. </p> <p> This is the final version: </p> <p> <img src='http://joshondesign.com:3194/images/81556_SafariScreenSnapz029.png' alt='text'/> </p> <p> </p> <p> </p> <p> I also shrunk the comments to 80%. Again it just felt right, and serifed fonts are easier to read in longer lines, so the comments can handle the smaller size. </p> <p> </p> <p> Here’s a <a href='http://joshondesign.com/p/demos/typopl/test2.html'>link to the live mockup</a> in HTML and CSS. This design turned out much better than I originally thought it would. We can do a lot without color and spacing changes. Now imagine what we could do will our full palette of tools. But that will have to wait for next time. </p> <p> </p> <p> BTW, if you submit this to Hacker News or Reddit please let me know <a href='https://twitter.com/joshmarinacci'>via Twitter</a> so I can answer questions. </p> http://joshondesign.com/2014/08/25/typopl2\nTypographic Programming LanguageAllow me to present a simple thought experiment. Suppose we didn’t need to store our code as ASCII text on disk. Could we change the way we write -- and more importantly <i>read</i> -- symbolic code? Let’s assume we have a magic code editor which can read, edit, and write anything we can imagine. 
Furthermore, assume we have a magic compiler which can work with the same. What would the ideal code look like? <p> Well, first we could get rid of delimiters. Why do we even have them? Because of our sufficiently stupid compilers. </p> <p> Delimiters like quotes are there to let the compiler know when a symbol ends and a literal begins. That’s also why variables can’t start with a number; the compiler wouldn’t know if you meant a variable name or a numeric literal. What if we could distinguish between them using typography instead? </p> <p> Here’s an example. </p> <p> <img src='http://joshondesign.com:3194/images/64218_SafariScreenSnapz013.png' alt='text'/> </p> <p> This example is semantically equivalent to: </p> <pre><code>print "The cats are hungry."; //no quotes or parens are needed</code></pre> <p> Rendering the literal inside a special colored box makes it more readable than the plain text version. We live in the 21st century. We have more typographic options than quotes! Let’s use them. A green box is but one simple option. </p> <p> Let’s take the string literal example further: </p> <p> <img src='http://joshondesign.com:3194/images/35718_SafariScreenSnapz014.png' alt='text'/> </p> <p> Without worrying about delimiting the literals we don’t need extra operators for concatenation; just put them inline. In fact, there is no longer any difference between string concatenation and variable interpolation. The only difference is how we choose to render them on screen. Number formatting can be shown inline as well, but visually separated by putting the format control in a gray box. </p> <p> <img src='http://joshondesign.com:3194/images/28095_SafariScreenSnapz017.png' alt='text'/> </p> <p> </p> <p> Also notice that comments are rendered in an entirely different font, and pushed to the side (with complete Unicode support, of course). </p> <p> Once we’ve gone down the road of showing string literals differently we could do the same with numbers. </p> <p> <img src='http://joshondesign.com:3194/images/49154_SafariScreenSnapz018.png' alt='text'/> </p> <p> </p> <p> </p> <p> Operators are still useful of course, but only when they represent a real mathematical operation. It turns out there is a separate glyph for multiplication (it’s not just an x), but it’s still visually confusing. Maybe a proper dot would be better. </p> <p> Since some numbers actually represent quantities with units, this hypothetical language would need separate types for those units. They could be rendered as prose by using common metric abbreviations. </p> <p> <img src='http://joshondesign.com:3194/images/58114_SafariScreenSnapz019.png' alt='text'/> </p> <p> </p> <p> In a sense, a number with units really is a different thing than plain numbers. It’s nice for the programming language to recognize that. </p> <p> As long as we are going to have special support for numeric and string literals, why not go all the way: </p> <p> Color literals </p> <p> <img src='http://joshondesign.com:3194/images/87674_SafariScreenSnapz020.png' alt='text'/> </p> <p> </p> <p> Image literals </p> <p> <img src='http://joshondesign.com:3194/images/37593_SafariScreenSnapz021.png' alt='text'/> </p> <p> </p> <p> Byte arrays </p> <p> <img src='http://joshondesign.com:3194/images/26240_SafariScreenSnapz022.png' alt='text'/> </p> <p> </p> <p> If our IDEs really understood the concepts represented in the language then we could write highly visual but still symbolic code. If only our compilers were <a href='http://c2.com/cgi/wiki?SufficientlySmartCompiler '>sufficiently smart</a>.
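The units idea doesn’t even need compiler magic to try out. Here is a tiny, purely hypothetical sketch in JavaScript of a quantity that knows its unit, so that mixing centimeters and inches can’t happen silently; the Length name and the conversion table are mine, invented for illustration: </p> <pre><code>// Hypothetical sketch: a number that carries its unit
function Length(value, unit) {
    var toMeters = { mm: 0.001, cm: 0.01, m: 1, 'in': 0.0254 };
    this.meters = value * toMeters[unit];
}
Length.prototype.plus = function(other) {
    // addition always happens in one canonical unit
    return new Length(this.meters + other.meters, 'm');
};

var total = new Length(10, 'cm').plus(new Length(2, 'in'));
console.log(total.meters); // 0.1508</code></pre> <p> A typographic editor could then render such a value as plain prose, like the units mockup above.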
</p> <p> </p> <p> I’m not saying we should actually code this way but thought experiments are a good way to find new ideas; ideas that we could then apply to our existing programming systems. </p> http://joshondesign.com/2014/08/22/typopl\nBuilding a Headline Viewer with AminoThis is part 3 of a series on <a href='https://github.com/joshmarinacci/aminogfx'>Amino</a>, a JavaScript graphics library for OpenGL on the Raspberry PI. You can also read <a href='http://joshondesign.com/2014/08/08/aminorefactored'>part 1</a> and <a href='http://joshondesign.com/2014/08/10/aminoslideshow'>part 2</a>. <p> Amino is built on <a href='http://nodejs.org'>Node JS</a>, a robust JavaScript runtime married to a powerful IO library. That’s nice and all, but the real magic of Node is the modules. For any file format you can think of someone has probably written a Node module to parse it. For any database you might want use, someone has made a module for it. <a href='https://www.npmjs.org'>npmjs.org</a> lists nearly ninety thousand packages! That’s a lot of modules ready for you to use. </p> <p> For today’s demo we will build a nice rotating display of news headlines that could run in the lobby of an office using a flatscreen TV on the wall. It will look like this: </p> <p> <img src='https://dl.dropbox.com/s/s9kkajq9axygafn/rssfeed2.png?raw=1' alt='image'/> </p> <p> </p> <p> We will fetch news headlines as RSS feeds. Feeds are easy to parse using Node streams and the <code>feedparser</code> module. Lets start by creating a <code>parseFeed</code> function. This function takes a url. It will load the feed from the url, extract the title of each article, then call the provided callback function with the list of headlines. </p> <pre><code>var FeedParser = require('feedparser'); var http = require('http'); function parseFeed(url, cb) { var headlines = []; http.get(url, function(res) { res.pipe(new FeedParser()) .on('meta',function(meta) { //console.log('the meta is',meta); }) .on('data',function(article) { console.log("title = ", article.title); headlines.push(article.title); }) .on('end',function() { console.log("ended"); cb(headlines); }) }); }</code></pre> <p> Node uses <i>streams</i>. Many functions, like the <code>http.get()</code> function, return a stream. You can pipe this stream through a filter or processor. In the code above we use the <code>FeedParser</code> object to filter the HTTP stream. This returns a new stream which will produce events. We can then listen to the events as the data flows through the system, picking up just the parts we want. In this case we will watch for the <code>data</code> event, which provides the article that was just parsed. Then we add just the title to the <code>headlines</code> array. When the <code>end</code> event happens we send the headlines array to the callback. This sort of streaming IO code is very common in Node programs. </p> <p> Now that we have a list of headlines lets make a display. We will hard code the size to 1280 x 720, a common HDTV resolution. Adjust this to fit your own TV if necessary. As before, the first thing we do is turn the titles into a CircularBuffer (see <a href='http://joshondesign.com/2014/08/10/aminoslideshow'>previous blog</a> ) and create a root group. 
</p> <pre><code>var amino = require('amino.js'); var sw = 1280; var sh = 720; parseFeed('http://www.npr.org/rss/rss.php?id=1001',function(titles) { amino.start(function(core, stage) { var titles = new CircularBuffer(titles); var root = new amino.Group(); stage.setSize(sw,sh); stage.setRoot(root); …</code></pre> <p> The RSS feed will be shown as two lines of text, so let’s create a text group then two text objects. Also create a background group to use later. Shapes are drawn in the order they are added, so we have to add the <code>bg</code> group <b>before</b> the textgroup. </p> <pre><code> var bg = new amino.Group(); root.add(bg); var textgroup = new amino.Group(); root.add(textgroup); var line1 = new amino.Text().x(50).y(200).fill("#ffffff").text('foo').fontSize(80); var line2 = new amino.Text().x(50).y(300).fill("#ffffff").text('bar').fontSize(80); textgroup.add(line1,line2);</code></pre> <p> Each Text object has the same position, color, and size except that one is 100 pixels lower down on the screen than the other. Now we need to animate them. </p> <p> The animation consists of three sections: set the text to the current headline, rotate the text in from the side, then rotate the text back out after a delay. </p> <p> In the <code>setHeadlines</code> function; if the headline is longer than the max we support (currently set to 34 letters) then chop it into pieces. If we were really smart we’d be careful about not breaking words, but I’ll leave that as an exercise to the reader. </p> <pre><code> function setHeadlines(headline,t1,t2) { var max = 34; if(headline.length > max) { t1.text(headline.substring(0,max)); t2.text(headline.substring(max)); } else { t1.text(headline); t2.text(''); } }</code></pre> <p> The <code>rotateIn</code> function calls <code>setHeadlines</code> with the next title, then animates the Y rotation axis from 220 degrees to 360 over two seconds (2000 milliseconds). It also triggers <code>rotateOut</code> when it’s done. </p> <pre><code> function rotateIn() { setHeadlines(titles.next(),line1,line2); textgroup.ry.anim().from(220).to(360).dur(2000).then(rotateOut).start(); }</code></pre> <p> A quick note on rotation. Amino is fully 3D so in theory you can rotate shapes in any direction, not just in the 2D plane. To keep things simple the <code>Group</code> object has three rotation properties: <code>rx</code>, <code>ry</code>, and <code>rz</code>. These each rotate <b>around</b> the x, y, and z axes. The x axis is horizontal and fixed to the top of the screen, so rotating around the x axis would flip the shape from top to bottom. The y axis is vertical and on the left side of the screen. Rotating around the y axis flips the shape left to right. If you want to do a rotation that looks like the standard 2D rotation, then you want to go around the Z axis with <code>rz</code>. Also note that all rotations are in <i>degrees</i>, not radians. </p> <p> The <code>rotateOut()</code> function rotates the text group back out from 0 to 140 over two seconds, then triggers <code>rotateIn</code> again. Since each function triggers the other they will continue to ping pong back and forth forever, pulling in a new headline each time. Notice the <code>delay()</code> call. This will make the animation wait five seconds before starting. </p> <pre><code> function rotateOut() { textgroup.ry.anim().delay(5000).from(0).to(140).dur(2000).then(rotateIn).start(); }</code></pre> <p> Finally we can start the whole shebang off back calling rotateIn the first time. 
</p> <pre><code> rotateIn();</code></pre> <p> What we have so far will work just fine but it’s a little boring because the background is pure black. Let’s add a few subtly moving rectangles in the background. </p> <p> First we will create the three rectangles. They are each fill the screen and are 50% translucent, in the colors red, green, and blue. </p> <pre><code> //three rects that fill the screen: red, green, blue. 50% translucent var rect1 = new amino.Rect().w(sw).h(sh).opacity(0.5).fill("#ff0000"); var rect2 = new amino.Rect().w(sw).h(sh).opacity(0.5).fill("#00ff00"); var rect3 = new amino.Rect().w(sw).h(sh).opacity(0.5).fill("#0000ff"); bg.add(rect1,rect2,rect3);</code></pre> <p> Now let’s move the two back rectangles off the left edge of the screen. </p> <pre><code> //animate the back two rects rect1.x(-1000); rect2.x(-1000);</code></pre> <p> Finally we can slide them from left to right and back. Notice that these animations set <code>loop</code> to -1 and <code>autoreverse</code> to 1. The loop count sets how many times the animation will run. Using <code>-1</code> makes it run forever. The autoreverse property makes the animation alternate direction each time. Rather than going from left to right and starting over at the left again, instead it will go left to right then right to left. Finally the second animation has a five second delay. This staggers the two animations so they will always be in different places. Since all three rectangles are translucent the colors will continually mix and change as the rectangles slide back and forth. </p> <pre><code> rect1.x.anim().from(-1000).to(1000).dur(5000) .loop(-1).autoreverse(true).start(); rect2.x.anim().from(-1000).to(1000).dur(3000) .loop(-1).autoreverse(true).delay(5000).start();</code></pre> <p> Here’s what it finally looks like. Of course a still picture can’t do justice to the real thing. </p> <p> <img src='https://www.dropbox.com/s/j198efaumm873kg/rssfeed.png?raw=1' alt='image'/> </p> <p> </p> <p> <img src='https://dl.dropbox.com/s/s9kkajq9axygafn/rssfeed2.png?raw=1' alt='image'/> </p> http://joshondesign.com/2014/08/19/amino3\nPhoto Slideshow in AminoThis is the second blog in a series about Amino, a Javascript OpenGL library for the Raspberry Pi. The first post is <a href='http://joshondesign.com/2014/08/08/aminorefactored'>here</a>. <p> This week we will build a digital photo frame. A Raspberry PI is perfect for this task because it plugs directly into the back of a flat screen TV through HDMI. Just give it power and network and you are ready to go. </p> <p> Last week I talked about the new Amino API built around properties. Several people commented that I didn’t say how to actually get and run Amino. Very good point! Let’s kick things off with an install-fest. These instructions assume you are running Raspbian, though pretty much any Linux distro should work. </p> <p> Amino is fundamentally a Node JS library so first you’ll need Node itself. Fortunately, installing Node is far easier than it used to be. In brief, update your system with <code>apt-get</code>, download and unzip Node from <code>nodejs.org</code>, and add <code>node</code> and <code>npm</code> to your path. Verify the installation with <code>npm —version</code>. I wrote full instructions <a href='http://joshondesign.com/2013/10/23/noderpi'>here</a> </p> <p> Amino uses some native code, so you’ll need <b>Node Gyp</b> and <b>GCC</b>. Verify GCC is installed with <code>gcc —version</code>. Install node-gyp with <code>npm install -g node-gyp</code>. 
</p> <p> Now we can check out and compile Amino. You’ll also need Git installed if you don’t have it. </p>

<pre><code>git clone git@github.com:joshmarinacci/aminogfx.git
cd aminogfx
node-gyp clean configure --OS=raspberrypi build
npm install
node build desktop
export NODE_PATH=build/desktop
node tests/examples/simple.js</code></pre>

<p> This will get the Amino source, build the native parts, then build the Javascript parts. When you run <code>node tests/examples/simple.js</code> you should see this: </p>

<p> <img src='https://dl.dropbox.com/s/7ls6p2z80wyrwlu/simple_circle.png' alt='image'/> </p>

<p> Now let’s build a photo slideshow. The app will scan a directory for images, then loop through the photos on screen. It will slide the photos to the left using two ImageViews, one for the outgoing image and one for the incoming, then swap them. First we need to import the required modules. </p>

<pre><code>var amino = require('amino.js');
var fs = require('fs');
var Group = require('amino').Group;
var ImageView = require('amino').ImageView;</code></pre>

<p> Technically you could call <code>amino.Group()</code> instead of importing <code>Group</code> separately, but it makes for less typing later on. </p>

<p> Now let’s check that the user specified an input directory. If so, then we can get a list of images to load. </p>

<pre><code>if(process.argv.length &lt; 3) {
    console.log("you must provide a directory to use");
    return;
}

var dir = process.argv[2];
var filelist = fs.readdirSync(dir).map(function(file) {
    return dir+'/'+file;
});</code></pre>

<p> So far this is all straightforward Node stuff. Since we are going to loop through the photos over and over again we will need an index to increment through the array. When the index reaches the end it should wrap around to the beginning, and it should cope with new images being added to the array. Since this is a common operation I created a utility object with a single function: <code>next()</code>. Each time we call <code>next</code> it will return the next image, automatically wrapping around. </p>

<pre><code>function CircularBuffer(arr) {
    this.arr = arr;
    this.index = -1;
    this.next = function() {
        this.index = (this.index+1)%this.arr.length;
        return this.arr[this.index];
    }
}

//wrap files in a circular buffer
var files = new CircularBuffer(filelist);</code></pre>
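<p> To see the wrap-around behavior on its own, here is a tiny standalone test of <code>CircularBuffer</code>. The file names are made up; this is just an illustration you can run directly with Node. </p>

<pre><code>// Standalone check of CircularBuffer: next() walks the array and
// wraps back to the start when it reaches the end.
function CircularBuffer(arr) {
    this.arr = arr;
    this.index = -1;
    this.next = function() {
        this.index = (this.index+1)%this.arr.length;
        return this.arr[this.index];
    };
}

var buf = new CircularBuffer(['a.png','b.png','c.png']);
for(var i = 0; i &lt; 5; i++) {
    console.log(buf.next());   // a.png, b.png, c.png, a.png, b.png
}</code></pre>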
<p> Now let’s set up a scene in Amino. To make sure the threading is handled properly you must always pass a setup function to <code>amino.start()</code>. It will set up the graphics system then give you a reference to the <code>core</code> object and a <code>stage</code>, which is the window you can draw in. (Technically it’s the contents of the window, not the window itself.) </p>

<pre><code>amino.start(function(core, stage) {
    stage.setSize(800,600);
    var root = new Group();
    stage.setRoot(root);
    var sw = stage.getW();
    var sh = stage.getH();

    //create two image views
    var iv1 = new ImageView().x(0);
    var iv2 = new ImageView().x(1000);
    //add to the scene
    root.add(iv1,iv2);
    …
});</code></pre>

<p> The setup function above sets the size of the stage and creates a Group to use as the root of the scene. Within the root it adds two image views, <code>iv1</code> and <code>iv2</code>. </p>

<p> The images may not be the same size as the screen so we must scale them. However, we can only scale once we know how big the images will be. Furthermore, the image view will hold different images as we loop through them, so we really want to recalculate the scale every time a new image is set. To do this, we will watch for changes to the image property of the image view like this: </p>

<pre><code>    //auto scale them with this function
    function scaleImage(img,prop,obj) {
        var scale = Math.min(sw/img.w,sh/img.h);
        obj.sx(scale).sy(scale);
    }
    // call scaleImage whenever the image property changes
    iv1.image.watch(scaleImage);
    iv2.image.watch(scaleImage);

    //load the first two images
    iv1.src(files.next());
    iv2.src(files.next());</code></pre>

<p> Now that we have two images we can animate them. Sliding images to the left is as simple as animating their <code>x</code> property. This code will animate the x position of <code>iv1</code> over 3 seconds, starting at <code>0</code> and going to <code>-sw</code>. This will slide the image off the screen to the left. </p>

<pre><code>iv1.x.anim().delay(1000).from(0).to(-sw).dur(3000).start();</code></pre>

<p> To slide the next image onto the screen we do the same thing for <code>iv2</code>: </p>

<pre><code>iv2.x.anim().delay(1000).from(sw).to(0).dur(3000)</code></pre>

<p> However, once the animation is done we want to swap the images and move them back, so let’s add a <code>then(afterAnim)</code> function call. This will invoke <code>afterAnim</code> once the second animation is done. The final call in the chain is to the <code>start()</code> function. Until <code>start</code> is called nothing will actually be animated. </p>

<pre><code>    //animate out and in
    function swap() {
        iv1.x.anim().delay(1000).from(0).to(-sw).dur(3000).start();
        iv2.x.anim().delay(1000).from(sw).to(0).dur(3000)
            .then(afterAnim).start();
    }
    //kick off the loop
    swap();</code></pre>

<p> The <code>afterAnim</code> function moves the ImageViews back to their original positions and moves the image from <code>iv2</code> to <code>iv1</code>. Since this happens between frames the viewer will never notice anything has changed. Finally it sets the source of <code>iv2</code> to the next image and calls the <code>swap()</code> function to loop again. </p>

<pre><code>    function afterAnim() {
        //swap images and move views back in place
        iv1.x(0);
        iv2.x(sw);
        iv1.image(iv2.image());
        // load the next image
        iv2.src(files.next());
        //recurse
        swap();
    }</code></pre>

<p> A note on something a bit subtle. The <code>src</code> of an image view is a string, either a URL or a file path, which refers to the image. The <code>image</code> property of an ImageView is the actual in-memory image. When you set the <code>src</code> to a new value the ImageView will automatically load it, then set the <code>image</code> property. That’s why we added a watch function to <code>iv1.image</code>, not <code>iv1.src</code>. </p>

<p> Now let’s run it. The last argument is the path to a directory containing some JPGs or PNGs. </p>

<pre><code>node demos/slideshow/slideshow.js demos/slideshow/images</code></pre>

<p> If everything goes well you should see something like this: </p>

<p> <img src='https://dl.dropbox.com/s/ctarisvabzp98mt/amino_yosemite_slideshow.png' alt='image'/> </p>

<p> By default, animations will use a cubic interpolator so the images will start moving slowly, speed up, then slow down again when they reach the end of the transition. This looks nicer than a straight linear interpolation. </p>

<p> So that’s it. A nice smooth slideshow in about 80 lines of code. By removing comments and utility functions we could get it under 40, but this longer version is easier to read. </p>

<p> Here is the final complete code. It is also in the git repo under demos/slideshow.
</p> <pre><code>var amino = require('amino.js');
var fs = require('fs');
var Group = require('amino').Group;
var ImageView = require('amino').ImageView;

if(process.argv.length &lt; 3) {
    console.log("you must provide a directory to use");
    return;
}

var dir = process.argv[2];
var filelist = fs.readdirSync(dir).map(function(file) {
    return dir+'/'+file;
});

function CircularBuffer(arr) {
    this.arr = arr;
    this.index = -1;
    this.next = function() {
        this.index = (this.index+1)%this.arr.length;
        return this.arr[this.index];
    }
}

//wrap files in a circular buffer
var files = new CircularBuffer(filelist);

amino.start(function(core, stage) {
    stage.setSize(800,600);
    var root = new Group();
    stage.setRoot(root);
    var sw = stage.getW();
    var sh = stage.getH();

    //create two image views
    var iv1 = new ImageView().x(0);
    var iv2 = new ImageView().x(1000);
    //add to the scene
    root.add(iv1,iv2);

    //auto scale them
    function scaleImage(img,prop,obj) {
        var scale = Math.min(sw/img.w,sh/img.h);
        obj.sx(scale).sy(scale);
    }
    iv1.image.watch(scaleImage);
    iv2.image.watch(scaleImage);

    //load the first two images
    iv1.src(files.next());
    iv2.src(files.next());

    //animate out and in
    function swap() {
        iv1.x.anim().delay(1000).from(0).to(-sw).dur(3000).start();
        iv2.x.anim().delay(1000).from(sw).to(0).dur(3000)
            .then(afterAnim).start();
    }
    swap();

    function afterAnim() {
        //swap images and move views back in place
        iv1.x(0);
        iv2.x(sw);
        iv1.image(iv2.image());
        iv2.src(files.next());
        //recurse
        swap();
    }
});</code></pre>

<p> Thank you and stay tuned for more Amino examples on my blog. </p>

<p> <a href='https://github.com/joshmarinacci/aminogfx'>Amino repo</a> </p>

http://joshondesign.com/2014/08/10/aminoslideshow

Amino: Refactored

I’ve been working on <a href='https://github.com/joshmarinacci/aminolang'>Amino</a>, my graphics library, for several years now. I’ve ported it from pure Java, to JavaScript, to a complex custom-language generator system (I was really into code-gen two years ago), and back to JS. It has accreted features and bloat. And yet, through all that time, even with blog posts and the <a href='http://goamino.org/'>goamino.org</a> website, I don’t think anyone but me has ever used it. I had accepted this fact and continued tweaking it to meet my personal needs, satisfied that I was creating something that lets me build other useful things. Until earlier this year.

<h3 id="id99617">OSCON</h3>

<p> In January I thought to submit a session to OSCON entitled <a href='http://www.oscon.com/oscon2014/public/schedule/detail/34535'>Data Dashboards with Amino, NodeJS, and the Raspberry Pi</a>. The concept was simple: Raspberry Pis are cheap but have a surprisingly powerful GPU. Flat screen TVs are also cheap. I can get a <a href='http://www.costco.com/Vizio-32%22-Class-720P-LED-HDTV-E320-B0.product.100089402.html'>32in model at Costco</a> for $200. Combine them with a wall mount and you have a cheap data dashboard. Much to my shock, the talk was accepted. </p>

<p> The session at OSCON was very well attended, proving there is clearly interest in Amino, at least on the Raspberry Pi. The demos I was able to pull off for the talk show that Amino is powerful enough to really push the Pi. My final example was an over-the-top futuristic dashboard for 'Awesomonium Levels', clearly at home in every super villain’s lair. If Amino can pull this off then it has found its niche. X Windows and browsers are so slow on the Pi that people are willing to use something different.
</p> <p> <img src='https://dl.dropboxusercontent.com/s/2woolscwfrlbigv/globe_super.png' alt='globe'/> </p>

<h3 id="id35359">Refactoring</h3>

<p> However, Amino still needs some work. While putting the demos together for my session I noticed how inefficient the API was. I’ve been working on Amino in various forms for at least 3 years, so the API patterns were set quite a while ago. The objects full of getters and setters clearly reflect my previous Java background. Not only have I improved my Javascript skills since then, but I have also read a lot about functional programming styles lately (book reports coming soon). This let me finally see ways to improve the API. </p>

<p> Much like any other UI toolkit, the core of the Amino API has always been a tree of nodes. Architecturally there are actually two trees: the Javascript one you interact with and the native one that actually makes the OpenGL calls; however, the native one is generally hidden away. </p>

<p> Since I came from a Java background I started with an object full of properties accessed with getters and setters. While this works, the syntax is annoying. You have to type the extra get/set words and remember to camel case the property names. Is the font name set with setFontName or setFontname? Because the getter and setter functions were just functions there was no place to access the property as a single object. This means other property functions have to be controlled with a separate API. To animate a property you must refer to it indirectly with a string, like this: </p>

<pre><code>var rect = new amino.ProtoRect().setX(5);
var anim = core.createPropAnim(rect,'x',0,100,1000);</code></pre>

<p> Not only is the animation configured through a separate object (<code>core</code>) but you have to remember the exact order of the various parameters for the starting and ending values, the duration, and the property name. Amino needs a more fluent API. </p>

<h3 id="id381">Enter Super Properties</h3>

<p> After playing with Javascript native getters and setters for a bit (which I’ve determined have no real use) I started looking at the jQuery API. A property can be represented by a single function which both gets and sets depending on the arguments. Since functions in Javascript are also objects, we can attach extra functionality to the function itself: magic behavior like binding and animation. The property itself becomes the natural place to put this behavior. I call these super properties. Here’s what they look like. </p>

<p> To get the x property of a rectangle: </p>

<pre><code>rect.x()</code></pre>

<p> To set the x property of a rectangle: </p>

<pre><code>rect.x(5);</code></pre>

<p> The setter returns a reference to the parent object, so super properties are chainable: </p>

<pre><code>rect.x(5).y(5).w(5);</code></pre>

<p> This syntax is already more compact than the old one: </p>

<pre><code>rect.setX(5).setY(5).setWidth(5);</code></pre>

<p> The property accessor is also an object with its own set of listeners. If I want to know when a property changes I can <i>watch</i> it like this: </p>

<pre><code>rect.x.watch(function(val) {
    console.log("x changed to "+val);
});</code></pre>

<p> Notice that I am referring to the accessor as an object <b>without</b> the parentheses. </p>

<p> Now that we can watch for variable changes we can also bind them together. </p>

<pre><code>rect.x.bindto(otherRect.x);</code></pre>

<p> If we combine binding with a modifier function, then properties become very powerful. To make <code>rect.x</code> always be the value of <code>otherRect.x</code> plus 10: </p>

<pre><code>rect.x.bindto(otherRect.x, function(val) {
    return val + 10;
});</code></pre>
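<p> To make the mechanics concrete, here is a stripped-down sketch of how a combined getter/setter function with <code>watch</code> and <code>bindto</code> support could be built. This is purely illustrative, with a made-up <code>makeProp</code> helper, and is not Amino’s actual implementation. </p>

<pre><code>// Illustrative sketch only (not Amino's real code): a "super property" is a
// single function that gets with no arguments and sets with one argument,
// carrying its listeners and helpers on the function object itself.
function makeProp(owner, initial) {
    var value = initial;
    var listeners = [];
    var prop = function(newValue) {
        if(arguments.length === 0) return value;         // getter
        value = newValue;                                 // setter
        listeners.forEach(function(cb) { cb(value); });   // notify watchers
        return owner;                                     // allow chaining
    };
    prop.watch = function(cb) { listeners.push(cb); return owner; };
    prop.bindto = function(other, modifier) {
        other.watch(function(val) {
            prop(modifier ? modifier(val) : val);
        });
        return owner;
    };
    return prop;
}

// usage: a fake rect with x and y super properties
var rect = {};
rect.x = makeProp(rect, 0);
rect.y = makeProp(rect, 0);
rect.x.watch(function(val) { console.log("x changed to " + val); });
rect.x(5).y(10);         // chained setters
console.log(rect.x());   // 5</code></pre>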
<p> Modifier functions can be used to convert types as well. Let’s format a string based on a number: </p>

<pre><code>label.text.bindto(rect.x, function(val) {
    return "The value is " + val;
});</code></pre>

<p> Since Javascript is a functional language we can improve this syntax with some meta-functions. This example creates a reusable string formatter. </p>

<pre><code>function formatter(str) {
    return function(val) {
        return str.replace('%',val);
    }
}
label1.text.bindto(rect.x, formatter('the x value is %'));
label2.text.bindto(rect.y, formatter('the y value is %'));</code></pre>

<p> Taking a page out of jQuery’s book, I added a find function to the Group object. It returns a selection which proxies the properties to the underlying objects. This lets me manipulate multiple objects at once (a sketch of the idea appears at the end of this post). </p>

<p> Suppose I have a group with ten rectangles. Each has a different position but they should all be the same size and color: </p>

<pre><code>group.find('Rect').w(20).h(30).fill('#00FF00');</code></pre>

<p> Soon Amino will support searching by CSS class and ID selectors. </p>

<h3 id="id96808">Animation</h3>

<p> Let’s get back to animation for a second. The old syntax was like this: </p>

<pre><code>var rect = new amino.ProtoRect().setX(5);
var anim = core.createPropAnim(rect,'x',0,100,1000);</code></pre>

<p> Here is the new syntax: </p>

<pre><code>var rect = new Rect().x(5);
rect.x.anim().from(0).to(100).dur(1000).start();</code></pre>

<p> We don’t need to pass in the object and property because the <code>anim</code> function is already attached to the property itself. Chaining the functions makes the syntax more natural. The <code>from</code> and <code>dur</code> functions are optional. If you don’t specify them the animation will start with the current value of the property (which is usually what you wanted anyway) and use a default duration (1/4 sec). Without those it looks like: </p>

<pre><code>rect.x.anim().to(100).start();</code></pre>

<p> Using a start function makes the animation behavior more explicit. If you don’t call <code>start</code> then the animation doesn’t start. This lets you set up and save a reference to the animation to be used later. </p>

<pre><code>var anim = rect.x.anim().from(-100).to(100).dur(1000).loop(5);
//some time later
anim.start();</code></pre>

<p> Delayed start also lets us add more complex animation control in the future, like chained or parallel animations: </p>

<pre><code>Anim.parallel([
    rect.x.anim().to(1000),
    circle.radius.anim().to(50),
    group.y.anim().from(50).to(100)
]).start();</code></pre>

<p> I’m really happy with the new syntax. It is built from simple functions on a common pattern of objects and super properties. Not only does this make a nicer syntax, but the implementation is cleaner as well. I was able to delete about a third of Amino’s JavaScript code! That’s an all-round win! </p>

<p> Since this changes Amino so much I’ve started a new repo for it. The old Amino is still available at: </p>

<p> <a href='https://github.com/joshmarinacci/aminolang'>https://github.com/joshmarinacci/aminolang</a> </p>

<p> The new Amino, the only one you should be using, is at: </p>

<p> <a href='https://github.com/joshmarinacci/aminogfx'>https://github.com/joshmarinacci/aminogfx</a> </p>
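<p> As promised above, here is a rough sketch of how a <code>find()</code>-style selection could proxy property calls to several nodes at once. The <code>Selection</code> function and its property list are made up for illustration; this is not Amino’s actual implementation. </p>

<pre><code>// Illustration only: a selection forwards chained property calls to every
// node it contains, mimicking group.find('Rect').w(20).h(30).fill(...).
function Selection(nodes) {
    var sel = {};
    // expose the same property names the nodes have
    ['x','y','w','h','fill'].forEach(function(name) {
        sel[name] = function(value) {
            nodes.forEach(function(node) { node[name](value); });
            return sel;   // keep the chain going
        };
    });
    return sel;
}

// usage with any nodes whose properties are setter functions:
// Selection(listOfRects).w(20).h(30).fill('#00FF00');</code></pre>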
<p> That’s it for today. Next time I’ll show you how to build a full-screen photo slideshow with the new animation API and a circular buffer. After that we’ll dig into 3D geometry and make a cool spinning globe. </p>

http://joshondesign.com/2014/08/08/aminorefactored