WebGL is finally ready for prime time – watch everything change! – Part 1 of 4

[Embedded demo: spinning Pebble smartwatch rendered in WebGL]

If you see a spinning Pebble smartwatch above, you are using a WebGL-enabled browser!  Otherwise you’ve fallen back to a precomputed, spinnable 360-degree format, which isn’t even close….
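For the curious, the detection behind that kind of fallback is straightforward: ask the browser for a WebGL context and branch on the result. Here is a minimal sketch in JavaScript (startWebGLViewer and startImageSpinner are hypothetical stand-ins for whatever viewer and fallback code you actually use):

var canvas = document.createElement("canvas");
// Ask for a WebGL context; older browsers only expose it under the experimental name.
var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
if (gl) {
  startWebGLViewer(canvas);  // real-time 3D rendering path (hypothetical function)
} else {
  startImageSpinner();       // precomputed 360-degree image fallback (hypothetical function)
}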

With the latest round of WebGL-capable browsers and the hard push toward optimizing JavaScript compilers, JavaScript “assembly” subsets that play well with those compilers (asm.js), compilers that generate asm.js-compliant JavaScript (LLVM with Emscripten), and even direct support for LLVM-based virtual machines and totally safe, sandboxed code execution (NaCl, in Chrome only), we are FINALLY ready for a 3D web. And much more than that: these technologies will signal the death knell of traditional OS-specific apps….. again, FINALLY!
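To make the asm.js piece concrete: it is simply a strictly typed subset of JavaScript that an engine can compile ahead of time. A toy, hand-written module might look like the sketch below; in practice you would rarely write this by hand, since Emscripten generates it for you from C or C++:

function ToyModule(stdlib, foreign, heap) {
  "use asm";               // declares that this module follows the asm.js subset
  function add(x, y) {
    x = x | 0;             // coerce the arguments to 32-bit integers
    y = y | 0;
    return (x + y) | 0;    // annotate the result as a 32-bit integer
  }
  return { add: add };
}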

The History from the Standpoint of Applications and the Web

If you are impatient or pressed for time you can skip down past this – but having a feel for the background really makes the appreciation sink in.  The history with respect to mobile devices is in part 2, web 3D is coming up in part 3, and part 4 will wrap things up by explaining how WebGL in browsers should pull all app development into its fold and leave the other targets as afterthoughts.

Let’s start at the beginning (where most things usually start) with the schizophrenic, 1.5 steps forward, 1.2 steps back nature of the computer world, where the ecosystem can change overnight – a Darwinian process that works well in the end but is far from optimal with respect to efficiency. In the beginning everything was proprietary to particular processors…. great job security for developers. Some reasonable guys created C to be “a portable machine language,” and for a while things moved closer to the write once, modify for various targets, and be good to go model. A useful, orthogonal model of development – 1.5 steps forward. And remember, these systems were very different under the hood – but conveniently they were mostly crunching data and spitting out text. Over time things exploded as essential proprietary libraries were created for various targets, breaking many of the most useful elements of this paradigm – only 0.5 steps back. These systems did in fact have different capabilities, processors and memory footprints, so a universal abstraction layer couldn’t be generated in many cases. So this did make some sense.

As we enter more modern times, hardware became nearly ubiquitous across operating systems – you probably had the same hardware underneath whether you were running Windows, Unix (including the Mac, being Unix of course), Linux, etc. However, the operating systems that hosted a large number of the same applications (everyone wanted Photoshop to run the same way on all targets) remained proprietary in nature with respect to their code. Thanks to the power of the fast new machines, code could be written with abstraction layers that made a write-once, build-for-multiple-targets model possible; however, many programmers still chose the easier, more feature-rich approach of writing code specifically for each target. For companies that did nothing but port, this was a cash cow.

When the internet took off, the ultimate force multiplier for a homogeneous, ubiquitous development abstraction layer was in motion – oddly disguised as a hypertext viewer.  In fits and starts, thanks primarily to Microsoft’s compatibility games (in Internet Explorer, Java, etc.), getting the web experience to be reliable and consistent was a process of never-ending testing.  This did get fixed to a good degree over time.  And as these things really settled down in recent years, the stage was almost set: cross-browser compliance was close enough.  There were key missing ingredients, however.

One of these missing ingredients was performance – you just couldn’t get close enough to native performance out of apps developed for the web without security risks a mile wide (AKA ActiveX).  These first-generation JavaScript/Ajax apps were really far from the ideal mark – a huge step back in look and feel from native apps.  However, they were relatively safe and sandboxed (after some time).  Most importantly, collaboration became available in a much larger way, since application installation was no longer an issue and a rate-limiting factor for deployment and acceptance.  So they were good enough, and they created a strange, ever-expanding environment of cross-bred internet applications.  In our guts, most developers realized these environments were first-class kluges, making the square-peg-in-a-round-hole metaphor look like an insane understatement.  HTML documents were never intended (or designed) for application development, and this environment really created a freak show of clunky technologies.  But they became irresistibly de facto – their limited functionality was simply too useful to end users.

Things were getting very close…. 2.5 steps forward for the concept of write once, run anywhere.  Everyone who was anyone wanted to have a web app built or customized for their company.  Over the course of several years, web-based app frameworks started to take hold – and sprout up everywhere.  The only things standing in the way of the universal “OS killer” app environment in browsers at this point were performance and a more unified, cohesive development experience (client-side functionality instead of everything on the server, support for multiple programming languages, something closer to symmetry between client-side and server-side code, etc.).

Side Note: 

Many would think Flash might have helped bridge these gaps – but the binary executable blob concept simply never sat well with a generation of developers weaned on the complete transparency of underlying code and implementation that came with the standard web development paradigm.  Flash was doomed well before Steve Jobs pulled a Microsoft and didn’t permit it on iOS devices.  Was he the benevolent guru of user experience, as he claimed – keeping the masses from poorly performing Flash apps?  Of course not; his Bill Gates spidey senses were at work – Flash apps could be as strong as app-store apps and would be completely out from under Apple’s thumb.  This is the same reason iOS devices don’t support WebGL – it makes uncontrolled, high-quality apps possible.  But Apple will cave in with iOS just as Microsoft did with WebGL in Internet Explorer – we’ll talk more about this later.

HTML5 was coming down the pike, and JavaScript engine optimizations were being implemented – even rudimentary 3D using canvas and “software rendering” was coming along.  Things were getting so close you could almost smell it in the air.  And then the machines themselves turned into a wrench in the works (literally and figuratively)…….

Mobile phones came on the scene with Android and iOS and …… the old days were revisited – proprietary “OS apps” were back in full swing.  Let’s once again set back the clock and take a big step back…….

More coming in part 2!

Review of Apache Axis2 Web Services 2nd Edition by Deepal Jayasinghe & Afkham Azeez

I just recently had a chance to read Apache Axis2 Web Services, 2nd Edition by Deepal Jayasinghe & Afkham Azeez.  I really didn’t know much about deployable Java frameworks for web-service integration, and this book quite effectively taught me a lot.  Axis2 seems to be a very strong, robust framework for implementing web-service solutions, having learned a lot from its initial incarnation, Axis 1.  The book gives a great breakdown of the history of Apache SOAP, Axis 1 and Axis2, including the motivations and reasons for each advancing technology.

The first few chapters do a great job of explaining how to install a distribution and give a look around its architecture, including its XML and SOAP models.  They show all the various ways to create and use AXIOM, the Axis2 XML object model.  This is followed by chapters explaining the execution chain of handlers and introducing the concept of a phase, which is a collection of handlers in a prescribed order.  The book continues with a full explanation of the deployment model and all the various ways to deploy handlers, showing both top-down and bottom-up approaches.  The Axis2 information model is explained next, as it is used in relation to service-oriented architectures.

In the following chapters, a thorough explanation is given of how to implement Axis2 services and modules.  The remaining chapters focus on everything you might want to know about the client API, session management and clustering.

All in all, Apache Axis2 Web Services, 2nd Edition got me excited about setting up Axis2 on my own server (which I plan to do over the next couple of days) for integration into my own custom geo-location and visualization applications.  I only recently became familiar with the books from Packt Publishing – but they are quite rapidly becoming one of my preferred publishers, and this book is another great addition to their offerings.

Review of OpenSceneGraph 3.0 Beginner’s Guide by Rui Wang & Xuelei Qian

When asked to review OpenSceneGraph 3.0: Beginner’s Guide I was excited to hear that a published book on OpenSceneGraph 3.0 was coming out.  OpenSceneGraph has grown into such a first-class tool that it’s high time some books started hitting the shelves on how to use it.  The dearth of good published material reminds me a little bit of OpenGL back in 1998 – it was difficult to find a book on it anywhere outside of the classic red, blue and white books.

This book is called a beginner’s guide but has useful bits for anyone who seriously uses the library.  OpenSceneGraph is simply amazing; but getting up to speed and just compiling and building can be daunting for the first-time user.  OpenSceneGraph 3.0: Beginner’s Guide excels at covering all the details of setting things up in the first few chapters and brings out a number of important gotchas that can really cause you to spin your wheels if you miss them.

Chapters 1-3 quickly get you set up and running the sample programs in your build environment.  It’s obvious that the authors have a preference for Windows, but the important Linux information is provided as well.  Chapters 4-9 get you up to speed on the basics of using scene graphs and, of course, OpenSceneGraph in particular.  It is quite comprehensive, covering the basic principles of scene graphs and exploring all the various nuances you may need – stereo rendering, multiple windows and viewports, etc.  This is in addition to covering the core basics of models, animation, lighting, texturing and so on.  Chapters 10-12 cover more advanced topics such as plugins, visual components and optimizing the rendering process.  A number of pop quizzes throughout the book ask well-thought-out questions about each chapter’s topics.

At 385 pages, I was highly impressed at the depth and scope of coverage.  The book does indeed deliver on its claims of being accessible to those brand new to OpenSceneGraph, although it does of course require a firm understanding of C++ – it’s not a primer on that.  To be frank, I almost find the title a bit deceiving – this book definitely should sit on the shelf of any OpenSceneGraph developer at any skill level.

I look forward to hearing back from others who have read and used this book.  I would recommend a cover to cover read for any OpenSceneGraph user.

The publisher’s site for this book is available for your perusal here.

24 years of game programming: thrills, chills and spills: part 3 of 3

If you haven’t read the first and second parts of this article, you’ll probably want to check them out here and here.

So I was working with a company that did 3D development for games! It was 1998 and things were still fresh and new in that area. The three APIs of note were Glide, Direct3D and OpenGL. I was working with some amazing artists and programmers and was glad to be learning more about the ins and outs of 3D development and tools. This was my first taste of a scene graph, using Paradigm Entertainment’s Viskit. Scene graphs are amazing things – and I was learning everything I could. Viskit was based on SGI’s Performer, and like OpenGL, they had done something really, really right when crafting this API. Viskit sat effortlessly on top of OpenGL or Direct3D. I ended up working quite a bit on our MultiGen loader to get all the bells and whistles supported. We used Max for our pre-rendered scenes, but at this point MultiGen Creator was huge in developing real-time models. It was a great app that made both programmers and artists feel immediately at home. Scene graph nodes could be mapped very easily to MultiGen nodes. It was a perfect match.

As usual my dedication ran high. I was able to entice a truly stellar programmer from my time in NJ to live with me a significant portion of the month so he could code on the team as well. I worked with two artists that to this day I have to say are second to none. We created an amazingly beautiful world with truly stunning environmental effects. It was enthralling. In the end, though, small companies with only one project are fraught with peril. In 2000 Aeon closed its doors and my love for Viskit took me to my next job at console developer Paradigm Entertainment. They had built a version of Viskit that ran on all the next-gen consoles including the Xbox, GameCube, PS2 and even the Dreamcast. It could still be baselined on the PC as well. This was my first time working with consoles – a fixed-format machine, finally!
It was now 2000 and I found myself in Dallas, Texas. In typical game industry fashion, several weeks after I joined, Paradigm sold themselves to Infogrames. However, they were still able to call their own shots during my time at the company. It was awesome having 3 or 4 teams working with 3D technology for multiple consoles. This technology was developed by a dedicated core technology group – I hadn’t been lucky enough to work at a company big enough to have this luxury before. They were the think tank of the company, always under all the heavy pressure but coming up with some truly stunning innovations. All of our artists were using Maya – which worked well; it supported a group hierarchy for scenes like MultiGen Creator had previously.

Consoles are truly outstanding beasts to develop for. In all the years of working on the PC I had always dreamed of being able to develop for fixed platforms. You could know that once your testing was done it was good to go – no surprises based on machine configurations later in the pipeline. C++ was still the defining factor in game programming, although plenty of PS2 code was written in microcode as well. Common acceptance of scripting languages for high-level prototyping was still a few years off. The best part about working at Paradigm was access to resources. When teams ramped up they could have 8-10 artists and almost as many coders. Things could really come together fast.
2003 found me back in Virginia, where I started working for ITSpatial – a company that did 3D representations of cities a la Google Earth … before Google Earth had been around. In fairness there was Keyhole, but what ITSpatial offered was a different kind of thing altogether. ITSpatial built a product that provided data integration and fusion in an environment that supported 2D and 3D mapping. This was perfect for command and control applications, situational awareness and emergency training. It was really one big serious game and had many of the elements of a full-fledged game development effort and content production pipeline. The only difference was the price point of the product and the number of shipping units.

ITSpatial was strong and knew a lot about sales and business development. They were phenomenal salespeople. I will always be amazed at how effortlessly they were able to get potential clients in the building looking at demos – 3 or 4 times a week some weeks. They really were devoted to finding and wooing clients. This is a skill I’ve still got a long way to go on. I guess in some ways I’m still 100% a programmer at heart. :)

In 2005 I broke away and this time started my own company, Eureka 3D, Inc. I’m more of a hired gun/contractor, with the luck to have access to a lot of other proven hired-gun contractors. Eureka 3D has worked on entertainment 3D and GIS, and I’ve really boned up on my web programming skills in the last two years. Deep down inside I still want to get back “into the game” and sink my teeth into that big two-year development project. Right now my pipeline is very full and I really am enjoying what I’m doing – so no complaints!

Here we are in 2007. What have I learned from the game and serious-game industry after all these years? So much… I’ve worked with top-notch programmers and others who couldn’t program their way out of a paper bag; I’ve worked with great artists, and a young up-and-coming project manager who was one of the sharpest guys I’ve ever met. I’ve worked with some truly genius biz-dev types who knew how to sell and pitch to clients. In 24 years every theory on project management for software/hardware/games has been debated, turned over and debated again, much to my chagrin. I’ve seen every element of the business, from pitching the contracts and building proofs of concept to final testing and delivery/deployment/distribution. I’ve learned a thing or two about politics and how important it is to avoid it if at all possible. I’ve literally seen programmers in knock-down, drag-out fights (visiting programmers from another company, no less!). Religious wars over version control systems, graphics APIs, and taking on the latest new technology – been there, seen that.
The main thing I’ve learned, however, is that I thrive on the high-intensity, creative, ingenious environment that games and serious games involve. Finding a team that works well together, or a few truly brilliant individuals, makes it all so worthwhile. I know I want to keep raising the bar and never lose the sense of adventure. Because in the end it’s about the product, the team, and the experience. What an amazing 24 years it’s been!

The Top 10 Attributes of a Great Programmer

With all the latest attention again on what does and doesn’t make a good programmer, I couldn’t help but put together my own top 10 list.

  1. Being a great problem solver
  2. Being driven and lazy at the same time
  3. Ability to understand other people’s code
  4. Having a passion for programming
  5. Loving learning for the sake of learning
  6. Being good at math
  7. Having good communications skills
  8. Strong debating skills
  9. Extreme optimism
  10. Extreme pessimism

After putting together this list, some aspects surprised me, and I was the one who put the list together. So let me explain each in detail. These attributes describe those I’ve found in pretty much every great programmer I’ve come across. There were a number that fell through the cracks, and I’ll explain those later.

    1. Being a great problem solver – Hopefully everyone recognizes this one. Most good programming is all about being able to find solutions where others can’t see them. If you don’t have this skill the others matter far less.
    2. Being driven and lazy at the same time – This one surprises some people. Programmers question things and are often “too lazy” to take the long route. They will spend countless cycles trying to simplify the problem and simplify their task. That said, they have a burning need to get the job done; they just want to do it as efficiently as possible.
    3. Ability to understand other people’s code – This point is essential but cuts some good programmers off from being great programmers. It doesn’t matter how well you can rewrite everything yourself – you need to be able to work with other people’s code on existing projects, lean on open source in new projects, and learn good techniques from the code base that is already out there.
    4. Having a passion for programming – On some level you have to love programming for programming’s sake. I suppose to be truly great at anything you have to love it in most cases.
    5. Loving learning for the sake of learning – Programming is a moving target. Unless you love the art of edification you will sink fast. There are no laurels to rest on and no one cares what you did yesterday. Unless you are aware of the techniques on the horizon, you won’t be ready to embrace them when they become relevant.
    6. Being good at math – Different people will have different opinions here, but at the very least a great programmer needs a strong grip on pre-calculus math. I’ve never seen a great programmer without a solid grasp of at least algebra and trig.
    7. Having good communications skills – This doesn’t mean that they can communicate with anyone and everyone. Specifically this means that they are able to clearly express their thoughts on their own terms. I’ve met plenty of great programmers who couldn’t communicate well with the world at large. However, given someone to talk to who understands the problem domain, they were all able to clearly state the problem and the solutions proposed.
    8. Strong debating skills – This follows the same logic as #7.
    9. Extreme optimism – Great programmers I have encountered have an insane certainty they can get the job done once they have chewed on it a bit.
    10. Extreme pessimism – Great programmers I have encountered have an insane insistence that, when they lack the information needed to make a good judgment, they won’t be able to make one at all.

      Some of the things I instinctively wanted to put on the list but couldn’t say were true of at least 95% of great programmers include the following:

    1. Being extremely organized – Understanding when and where organization is important, yes. But anal attention to detail is present in great programmers about as often as it is in people from other disciplines.
    2. Being good at managing other people and/or programming projects – Somehow these skill sets are wonderfully synergistic when they sit side by side, but management and programming are often completely different disciplines.
    3. Being able to write good design documents – Same as #2. This skill may make some people better programmers and I am in favor of learning it. However, plenty of great programmers I have encountered couldn’t write a coherent design doc if their life depended on it. This will no doubt be heavily debated by some.
    4. Having an ability to estimate time frames – Once again like #2. This is an acquired skill and a very useful one. However, I have seen absolutely zero correlation between great programmers and estimation skills.
    5. Prolific reading of tech books – I do this all the time myself, but many great programmers don’t. Let me be clear though – most programmers who aren’t all that hot could definitely benefit from bootstrapping their skills with some good reading.
    6. Ability to transfer their programming skills to any programming domain – Although many can, some great programmers can’t, or refuse to, grok other programming technologies. I like to think that this is a “refuse to” situation.
    7. Write code that is correct the first time around – Many great programmers commonly have syntactic issues flagged by the compiler or at runtime by the interpreter. Some are zealots about the details the first time out; others are much more “extreme” in this area.
    8. Having other areas of great skills – some great programmers are good at only one thing – programming.
    9. Social or antisocial – Great programmers come in both forms.
    10. Are someone you’d want on your team – Unfortunately some of them just can’t work with others well.