Singularity University Executive Program

In March 2013 I had the opportunity and pleasure to attend the Singularity University Executive Program. It was a wonderful experience that I would recommend to anyone wanting to explore the impact of technology and the pace of technological change on our lives, our companies, and our world.

The SU EP is a six-day course in which you will experience a tremendous flood of information. But, if you're like me, you seek out such things and face the firehose full-on.

The tracks you study during the program are Biotechnology/Bioinformatics, Energy/Environmental Systems, Networks/Computing Systems, AI/Robotics, Medicine/Neuroscience, and Nanotechnology. This gives you a wonderfully broad view, with spot deep dives by subject matter experts that drive home the ultimate point: exponential technological change is shaping the world around us at speeds that are breakneck relative to what we humans are generally comfortable with. Over time, I hope to share many of the insights I gleaned from SU here and through my projects. For now, I recommend you have a look at the site they maintain called Singularity Hub to keep tabs on many of the related technologies and topics.

One of the best takeaways for me was the friends I made and the many people I got to meet during the program. At the end of the day, no matter how technical things seem to get, never underestimate the power of human connections.

Battling Process Entropy

Building good software requires a modicum of process to ensure quality and on-time delivery. Too much process is bad. Too little process is bad. Poorly followed or atrophied process is potentially worse than either of those alone.

I have observed again and again that no matter what processes are in place, heavy-handed or not, there is a tendency toward entropy in their application. Software development lifecycle workflows come in many patterns, and they all suffer this same challenge as time passes.

Examples: 

  • Assigning story points / effort scoring in Scrum or Kanban flows - Estimating is an art more than a science sometimes, but it is deeply important.
  • Sprint planning in a Scrum flow - People can get a little worn out by the rigor involved in full-on Scrum/Agile. However, these processes exist for a reason. Skipping "just this one time" soon turns into project and technical debt that eventually has to be repaid.
  • Checklist processes - You spent the time to make the checklist and you know exactly how to use it, but again and again I see checklists gathering dust. This is true of meeting checklists, launch checklists, etc.
  • Launch rehearsals, with adherence to NO GO if things don't go well - This can save so very much pain. People are people; we get tired, get busy, and miss things. Running through the paces in staged rehearsals combats this mightily. Skipping this process generally leads to lots of lost sleep and bad feelings.
  • Testing falling by the wayside - We've got loads of test types: functional, QA, unit, load, etc. You don't always need them all, but simply skipping testing is a true recipe for disaster when it's truly go time.

and more... 

In summary, as noted, too much process is simply not good. But no process at all is just a different kind of mess. Letting processes you spent perfectly good time and effort to create and learn atrophy is a bit of a crime (as would be following them blindly, of course). It's up to the active team to lay in the right processes, in the right amount, at the right time to achieve an outcome that is high quality, timely, and cost controlled, with tolerance for estimation error. Ironically, NOT trying to plan every tiny little detail up front is one of the real keys to all of this, but it only works when there is room to move. Lastly, everything is always in motion. Just like code, processes need to be refactored over time.

 

140+ v2013.05.15

The field of Deep Learning has something that just rings true. http://bit.ly/139i756 very exciting field.

Deep Learning is a truly exciting area in the field of computer science and mathematics. I was initially brought into this way of thinking when I ran into a number of Jeff Hawkins's lectures on YouTube, then purchased and read his book "On Intelligence." Since then I've been learning about and exploring various iterations of the concept of deep learning as embodied in projects like Google Now, work at IBM, and many others. I think the possibilities are nearly endless with this technology.

The inevitable question of human-level AI always comes up when discussing deep learning but, in fact, I have little desire at the moment for a robot best friend, with or without benefits. What I would like to see is human augmentation beyond what we have already today.

"Traditional IT Department No Longer Tenable" How #CloudComputing Changes Enterprise IT Economics http://bit.ly/13kmsG3 

I started blogging about cloud computing right here on this blog in 2007/8. My first posts looked around to see what people wanted and were trying to do, teasing out what seems obvious now: the differences between SaaS, PaaS, and IaaS. Shortly after, I started a company called nScaled to actually build and use clouds. In the interim I've helped dozens of companies build private clouds, public clouds, hybrid clouds, and lots more.

This tweet caught my attention because, having done all that and then seeing this post, my reaction was that what's not tenable is IT as an island. IT is simply part of any business now. It's deeply integrated and, one way or another, it's all about the cloud no matter which type you want to build. If you don't go that way, your business will not be able to compete.

Google I/O 2013 Extended - at the Universidad de Lima :) http://fb.me/RJEuL72H

I just spent a month in Lima, Peru, living with my family and working. While there I had the great opportunity to meet many local entrepreneurs and even visit one of the coolest startup incubators in South America, Wayra Peru, which is funded by Telefonica. Things are moving fast and growing quickly in the region, and I am super-excited by what I found while I was there.

Analysts Report that Cloud-Based Adoption Increased 40 Percent this Year for Supply Chain Software http://bit.ly/13kmnlO 

This one just goes to show you how deep cloud computing adoption is getting. These are dyed-in-the-wool, you-gotta-have-it systems tied into the heart of the value chain, and big manufacturing companies are adopting cloud-based versions at dizzying rates. How cool is that? To all those cloud haters from way back in 2007: I poke you in the eye today. There is no going back now.

About 140+

140+ is my periodic effort to expound further on 3-5 of my recent tweets because sometimes, 140 characters just isn't enough.

What is to Show for Five Years of Cloud Computing?

Just a few short years ago, launching a virtual machine in the cloud was simple and basic. With a couple of API calls and maybe a button click or two you were up and running in just a few minutes. The choices were limited, but it was nice. You could even get a little storage to go along with the instance. Just before that, we were leasing, renting, and co-locating dedicated physical hardware in data centers, and it took weeks to order, provision, deploy, and set up the gear. Fast forward to today and we are full-on in a cloud computing revolution that is redefining how technology is deployed. There are so many choices, and so many of them good, that it can be completely overwhelming to those trying to make sense of it all. On top of that, every day I meet people who have never deployed anything in “the cloud.” It's just as easy as ever to launch a machine, but there is so much more available to the would-be on-demand computing client today. I was trying to think of what really is different or better today than it was five years ago. What is there to show for 5+ years of cloud computing innovation and effort?
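To put the "couple of API calls" claim in perspective, here is a minimal sketch of launching a single instance using Python and the boto library against AWS EC2. The region, AMI ID, key pair, and security group below are illustrative placeholders, not values from any real account:

    # Minimal sketch: launch one EC2 instance with boto (Python).
    # The AMI ID, key pair, and security group are placeholders.
    import boto.ec2

    # Credentials come from the environment or the ~/.boto config file.
    conn = boto.ec2.connect_to_region("us-east-1")

    reservation = conn.run_instances(
        "ami-12345678",               # placeholder AMI ID
        instance_type="m1.small",
        key_name="my-keypair",        # placeholder key pair name
        security_groups=["default"],
    )

    instance = reservation.instances[0]
    print("Launched %s, state: %s" % (instance.id, instance.state))

That really is the whole job. Everything else below is about what has grown up around that simple core.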

First and foremost, people figured out that cloud computing is good for something really important (meaning people in general, not just Google). They figured out that the cloud in its various forms is phenomenal for capturing and processing what has come to be known as big data. This is a really important point. It's never been easy, and still isn't, to aggregate and process voluminous, high-speed, or wildly unstructured data. In fact, prior to cloud computing coming of age, it was downright impossible, fiscally and technically. Now, it's all there at the click of a few buttons, as pretty as you like. You can spin up a supercomputer for just a few dollars an hour to crunch even your most gnarly data sets.
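As one illustration of that "supercomputer for a few dollars an hour" point, here is a sketch using boto's Elastic MapReduce support to spin up a ten-node Hadoop cluster and run a streaming job. The S3 bucket paths, script names, and instance choices are all hypothetical:

    # Sketch: a ten-node Hadoop cluster on EMR via boto (Python).
    # All S3 paths and names below are hypothetical placeholders.
    import boto.emr
    from boto.emr.step import StreamingStep

    conn = boto.emr.connect_to_region("us-east-1")

    step = StreamingStep(
        name="Crunch the gnarly data",
        mapper="s3n://my-bucket/scripts/mapper.py",  # placeholder script
        reducer="aggregate",                         # built-in reducer
        input="s3n://my-bucket/input/",
        output="s3n://my-bucket/output/",
    )

    jobflow_id = conn.run_jobflow(
        name="big-data-crunch",
        steps=[step],
        num_instances=10,
        master_instance_type="m1.large",
        slave_instance_type="m1.large",
    )
    print("Started job flow %s" % jobflow_id)

Ten machines for the duration of the job, then gone. That economic model is what made big data practical for everyone else.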

A second fairly dramatic improvement is in the orchestration of resources. There are far more resources available to orchestrate, for an almost infinite number of purposes, and doing so has never been easier (note, I did not say easy). Thanks to the proliferation of well-understood APIs, both created and consumed, you can now quite literally use a single set of tools to launch servers at several different cloud providers, in several geographic locations, and even across operating system varieties if you so wanted. If you're clever with tools like Puppet, Chef, CloudFormation, Cloud Foundry, or others, you can do it all from the comfort of your very own laptop in just a few minutes. You can quickly and, historically speaking, relatively easily compose masses of servers into useful services for nearly anything you can dream!
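The "single set of tools" point is the key one. As a sketch of the idea, Apache Libcloud exposes one Python API across many providers; the credentials below are placeholders, and the first image and size in each provider's list are picked arbitrarily just to keep the example short:

    # Sketch: one API, several clouds, using Apache Libcloud (Python).
    # Credentials are placeholders; image/size choices are arbitrary.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    drivers = [
        get_driver(Provider.EC2)("ACCESS_KEY_ID", "SECRET_KEY"),
        get_driver(Provider.RACKSPACE)("username", "api_key"),
    ]

    for driver in drivers:
        image = driver.list_images()[0]  # arbitrary image for the demo
        size = driver.list_sizes()[0]    # smallest size in the list
        node = driver.create_node(name="demo-node", image=image, size=size)
        print(node.name, node.state, node.public_ips)

Same loop, two very different clouds. Layer Puppet or Chef on top of the nodes that come back and you have the makings of real multi-cloud orchestration from your laptop.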

A third thing that’s changed is the raw power available via a command line or cloud console and in the newer implementations of older software architectures. You can now, in just a few moments, provision a server with 244 GiB memory and high speed 10 Gigabit Ethernet. And, that is just a building block to the real power. The real power comes as a result of massive improvements and capabilities in the arena of distributed computation, storage, and software defined networking. This allows you to provision dozens to thousands of these types of machines relatively on a whim. Frankly, not many people can even figure out what to do with all this power even if they do know how to provision it today. This has forced software architects and engineers to push forward much faster with zeal and learn how to write distributed applications and in many cases, the occasion is being met. So, raw power in both virtualized hardware and the software that can be deployed on it has come a very long way.

In summary, cloud computing had already been brewing for decades, with roots reaching far back in time; grids, clusters, and more were all precursors. Still, it is striking how far things have come in just about five years. There has been unprecedented improvement, at what feels like an ever-increasing speed. Good times indeed.