Living on the Edge!


So for those who haven’t noticed the LinkedIn profile update, I’m now fortunate enough to join the awesome folks at Vapor IO.  I’m super excited to be part of this team as we push forward to deliver the next-generation hardware, software, and service technologies needed for the low-latency demands of 5G communications, including IoT, virtual and augmented reality, smart cities, and connected cars.  If you’re interested in the details, check out the work we’re doing with Project Volutus….and if you wanna join me in this adventure, I AM HIRING!!!  And if you don’t see a fit, shoot me an email (robbie@vapor.io), because I’m always open to hiring great engineering talent when possible.


robbie.williamson@canonical.com reaches End of Life on April 30, 2018

Ubuntu announced its new Foundations team manager over 9 years ago. On September 29, 2008, an internal email went out with the pre-announcement:

Hi folks,

I'm pleased to be able to let you know that we've signed a new manager
for the Foundations team. His name is Robbie Williamson, currently at
IBM, and he'll be joining us from the 20th of October. Please keep this
quiet for now (i.e. it's not for dissemination on IRC channels or
mailing lists yet; Matt will be preparing an announcement for
distro-team@ in due course), as he has yet to inform the team he's
currently managing! However, he said he was OK with me letting you guys
know in advance now.

I've talked with a couple of you about this already, but I believe that
we're provisionally planning to put most of the previously-discussed
reporting changes into effect at the same time. Thus:

20 October:
 Colin: Foundations manager -> Foundations tech lead
 Alexander, Arne, Bryce, Chris: Foundations -> Desktop
 Michael: Desktop -> Foundations

When a new desktop manager is hired:
 Scott: Desktop manager -> Foundations

Any objections, please let me know!

Cheers,




-- 
Colin Watson [cjwatson@canonical.com]

The support period for this “release” is now nearing its end, and robbie.williamson@canonical.com reaches end of life today, April 30th. At that time, Canonical will no longer provide information or updated packages for Robbie Williamson.

The supported upgrade path from robbie.williamson@canonical.com is via robbie@ubuntu.com. The robbie@ubuntu.com email address will continue to be actively supported with reads and select high-impact responses. All announcements of official updates for Robbie Williamson are also sent to Twitter and LinkedIn.

Since its launch in October 2004, Ubuntu has become one of the most highly regarded Linux distributions, with millions of users in homes, schools, businesses, and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customize or alter their software in order to meet their needs.  I am honored to have played a role in helping to make Ubuntu what it is today, extremely grateful for the opportunity given to me to do so, and infinitely proud to have worked with so many amazingly bright and wonderful people inside Canonical and the Ubuntu community as a whole.

As for what’s next….stay tuned to this channel 😉

“Ubuhulk” signing off!

UDS Precise Pangolin

 

Robert Williamson…You. Are. An Ironman!


It’s been over 3 years since my last blog post.  I haven’t been hiding or anything, it’s just that with updates across Twitter, Facebook, LinkedIn, Instagram, and Snapchat…I never felt the need to post beyond the friends and followers I have there.  However, this past weekend changed that.

As you can tell by the title and the picture above, I completed my first full 140.6 mile IronMan race this past Saturday in Cambridge, Maryland.  For those who don’t know me well, or just haven’t been keeping up on the various social media streams, I started doing triathlons about three years ago.  I never had a life goal to do an IronMan…or a Half IronMan…or even a triathlon.  I knew I could do a triathlon, but wasn’t sure if I’d like doing it…but I tried CrossFit…liked that…tried a half marathon…didn’t die…so f*ck it, I figured I’d keep going outside of my comfort zone.

My first triathlon was in May of 2015…the annual Rookie Tri in Austin, TX.  It’s a perfect triathlon for newbies, as the distances are short: a 300 meter swim, then an 11 mile bike, ending with a 2 mile run.  I finished with a time of 1:13:05…looking good in all my Hulk splendor, lol.


Fast forward two years of more CrossFit…more triathlons…more training, and I did my first IronMan 70.3 mile distance race in Austin, Texas last October.  I finished with a time of 6:12:48.


With the swim getting cancelled, a delayed bike start, and record high temperatures that day, the race didn’t go as well as I had hoped…but I did finish, which is all I really hoped to do.  I didn’t have any significant injuries post-race, but I was hurting at the end for sure.  I didn’t feel recovered physically (i.e. able to work out) for probably 4 or 5 days afterwards.


Despite all of that, I had already pushed myself further…and signed up for the 2017 IronMan Maryland race about 3 weeks before.  I decided to sign up before my IronMan 70.3 to avoid any fatigue/pain-induced hesitation or second thoughts post-race.  To be honest, I may not have done it if I had waited, so looking back now…glad I did.  I chose Maryland because the bike and run courses were flat, and the time of year would ensure a wetsuit swim (helps with buoyancy and improves my speed) and a relatively cool ride and run.  It was the perfect race for me…at least from my initial research.  Later on…after registering, waiving my partial refund option, and finishing my 70.3…I would learn that this would only be the 3rd year for the race.  To make things worse, the previous two years had severe weather issues that caused rescheduling and shortening of the course.  Given I had my swim cancelled in my 70.3 race (thus really being a 69.1), you can imagine how upsetting this was…but I was in it now, so all I could do was train and hope for the best.

Fast forward to April 2017.

For my IronMan 70.3, I used an online training application called Training Peaks.  The IronMan organization has coaches that post 6 month training plans that you can pay for and plug right into the application.  It will then populate a calendar with every run, bike, swim, and strength workout you need to do until race day.  The plans are adjustable based on your progression and you can slide some workouts around to fit your schedule.  Given it worked pretty well for my 70.3, I decided to use it again for my full 140.6 race training.  I started 7 months out to allow slack for my annual vacation week with my sons, as well as those days when work travel or life commitments wouldn’t allow for training.


Now for those of you who have never done an IronMan, the race itself is the relatively easy part…training is the hard part.  Most people think you’re just swimming, biking, and running all the time…and you are, but you’re also figuring out how many calories you need a day to keep that up, what nutrition works with your body for the long rides and runs, how you will maintain hydration and electrolyte levels without having to pee/puke/poop all the time, the right settings for your bike to avoid unnecessary pain, discomfort, and loss of pedal power, the right clothing to wear, breathing and stroke techniques for the swim, and how to efficiently and effectively recover.  Your life becomes programmed for 6-7 months…and this is on top of the day-to-day demands of your “normal” life.  I was living the IronMan “dream”.  Spending hours and hours training, trying to maintain some level of strength with scaled-back CrossFit workouts when I could, working/traveling as a full-time executive, putting in full effort as a part-time single Dad of the best two boys in the world, and attempting to have a social life when I wasn’t exhausted and just wanted to sleep.  Smartly, I only signed up for 3 triathlon races this year, each spaced appropriately apart with building distances, because I knew that’s about all I could handle.

By the start of July…I was burning out.  Everything mentioned before, on top of other stress in my personal life, was becoming too much…I wasn’t enjoying myself much…but still trying to “fake it, until I made it”.  My training was done mostly alone, because my work/parenting demands leave little slack in my free time.  Training groups are awesome and I highly recommend them, but if you are gonna train in a group or with friends, you need to be somewhat flexible to accommodate everyone’s schedule…and I just didn’t have that.  So take that, add me having to give up weekend basketball to reduce the likelihood of injuries, missing a lot of CrossFit workouts with friends due to fatigue or lack of time, plus the fact that I’m a single Dad who works from home in an empty house a lot of the time….and you can see how the alone time can start to wear on you.  While I knew the alone training time was good from the standpoint of mental race preparation (it’s mostly just you out there until the run), I also knew I needed to free up more time or my training (and my life) would start to suffer from lack of motivation.

So, I started searching for IronMan training programs that included strength as a core part and would allow me to have more free time.  I figured that just like High Intensity Interval Training (HIIT) builds strength without hours in the gym, there had to be a better way to build endurance without so many high-volume workouts.  I recalled there being a CrossFit Endurance program a couple years ago, but it had gone away.  I did some digging on the person who created it, Brian MacKenzie, and ended up coming across his new program called Power Speed Endurance (PSE).  This was created exactly to meet my needs…it was literally an answer to my prayers.  People can read about it on their own, but in short, the focus is more on proper technique and intensity in the triathlon workout, versus the volume.  It includes sport-specific strength and HIIT cardio programming, along with a strong focus on the proper approach to mobility and breathing to aid in recovery.  Needless to say, after reading through the theory, results, and sample programming, I immediately switched over.  I also set up a 1-on-1 with the triathlon coach, Jeff Ford, who is THE BOMB!  Now to be clear, having any coach, even remote, is a huge plus because they can adjust programming on the fly for your situation and are there to answer questions or concerns you may have.  Jeff put together a 12 week plan for me based on my needs, ability, experience, and expectations….which I was lucky to have in place EXACTLY 12 weeks out from my race.  It essentially augmented the normal PSE triathlete program to include customized workouts on the weekend, notes on nutrition, and how much volume to add towards the race.  However, all of this was accomplished without going into the extremely long, exhausting workouts of traditional IronMan programming.  For example, the longest I ever rode my bike in my 12 week lead-up was 50 miles, my longest run was 13.1 miles, and my longest swim 2000 meters.  Traditional programming has you riding 100 miles 3 weeks out from the race…no thank you.  I began seeing results after the first few weeks.  My workouts were more interesting with the strength and HIIT conditioning, I was getting measurably stronger, and my performance in my swims, bikes, and runs was improving markedly.  The best part was that this was all occurring while leaving me more time and energy to live my life.  I think if you wanted to compete at the pro level, then you’d need to do more of the traditional style…but if you’re just trying to be a good age-group triathlete, not lose a sh*t load of muscle, and feel somewhat “normal” after the race is done, I strongly suggest checking this approach out.


My final triathlon before my IronMan was the Onalaska 70.299 in Onalaska, Texas, near Lake Livingston, on September 10, 2017…about 4 weeks out from the race.  No idea why they say 70.299, but I suspect there is some trademark-related issue with IronMan because the distances added up to 70.3 miles.  Anyway, I was a bit apprehensive about doing it because I had to travel out there alone, didn’t know anyone in the race, didn’t know anyone who had done the race, and it would be the longest race I had ever done…assuming the swim wasn’t cancelled again.  From the moment I got there, things started going a little “off plan”, but I ended up finishing with a time of 8:11:21…at 73.2 miles.  I’ll spare you the details (most of my friends already know them), but every leg of the race had chaos, e.g. missing porta-potties, choppy waves, swim buoys moving, a lovebug infestation, wrong distances, and apparently alligators had been seen in the lake.  Needless to say, I was not very happy after the race…at all.  I had swum, biked, and run slower than planned, had stomach issues during the run, and then had to load up all my stuff by myself, including my bike, and drive 3 hours back home right after.  For the first hour of the drive back, I was honestly rattled…my performance was worse than a year before, I was only 4 weeks out from doing a race twice as long, and if I didn’t improve I could hit the 17 hour time limit and not finish…not get a medal…not be an IronMan.

After an hour of feeling sorry for myself, I decided that I needed to use this experience to get better…not bitter.  I focused on what I could learn from the race and any positive takeaways…and there were some good ones.  I knew I could handle a rough water swim now, I knew I could handle surprises on the course, I knew I could handle stomach issues, and I knew that I was strong-willed and focused enough to finish a race even when the race director officially ended it (too early) and I saw other people taking short-cuts on the run.  The best part was that I knew I was in better shape because the next day I was able to walk into my CrossFit gym and hit a personal best on a 3 rep max back squat….I could barely walk the day after my IronMan 70.3 Austin a year before.  So in the end, Onalaska was a blessing……buuuut I’ll never do it again. LOL


After Onalaska, I had four weeks left…that included three business trips (one international), a multitude of Dad commitments for school, sports, etc, a 13.1 mile run during Hurricane Harvey (wasn’t that bad in Austin, just non-stop rain and wind gusts), and me missing my flight out of Austin Thursday morning because I forgot to update my calendar after United had moved the flight over a month before (ugh)….but I made it through all of that (and more) to arrive in Cambridge, Maryland on Thursday, October 5th.  Then, about 5 minutes after I arrived, my good friend and fellow competitor in the race, Brent Baker, sent me these.

Yep…that’s right folks…jellyfish in the water.  We had heard that there were jellyfish in this lake, but that they clear out by October…apparently that’s late October.  I have never been stung by a jellyfish, so I had no idea how bad it would feel…if I was allergic…if my throat would swell up during the swim and I’d drown!  However, after Onalaska…after the 6 months of training…and all the money spent to get here, I decided to suck it up and swim.  I figured that between having a full-body wetsuit and the over 1000 other swimmers in the water…I had a solid chance of getting through it pain-free.

The next day was spent picking up registration and swag bag stuff, checking in gear bags and my bike, attending the athlete briefing, checking out the finisher medal and finish line near IronMan village….and trying to convince myself that I had nothing to worry about with those damn jellyfish.

That evening, I had a good dinner with my friends who were also competing, along with my Mom and their families.  Later that night a couple more friends would arrive, one unexpectedly, to be ready to cheer me on the next day.

Race day arrived.  I had gotten a decent amount of sleep the night before, which was nice, and had my usual pre-training day protein bar and amino drink in the morning. I was fresh and feeling great!  We all headed out to the race…full of nervous energy and excitement.  I think my family and friends were more nervous and excited for me than I was, but I’m also pretty good at staying calm before the storm, so to speak. Plus, that morning I had made the decision to enjoy the day.  No more worrying about stupid jellyfish, or flats/wrecks/mechanical issues on the bike, or stomach issues and cramps on the run, or the weather, or whatever.  I had put in a ton of work for this and to finally get here and obsessively worry would be a waste.  So when that alarm went off, I sent a short message to the Man upstairs to keep me safe so I could return to my family and friends, had a chat with my Dad in heaven, and then put 100% of my energy into making the most of the day with the people I love.

The 2.4 mile swim went extremely well for me.  I’m already a pretty strong/fast swimmer, and with the wetsuit, I knew that I’d be even better today……buuuut we did have jellyfish.  Lucky for me, I had zero stings…can’t say the same for my friends, but no one was even remotely hurt during the swim.  Perhaps it was the last minute jellyfish repellent cream a fellow athlete gave me, or that I peed in my wetsuit halfway through the first of two loops (don’t judge me…everyone does it), but I came out unstung.  There was a strong current, but it was mostly pushing us from behind, and I ended up coming out with the fastest long distance swim (i.e. 1 mile or more) I’ve ever had….good start to a good day.


After rinsing off the salt water (that’s why my face looks like that) and getting the wetsuit stripped off, I picked up my bike transition bag and headed for the men’s changing tent.  IronMan is the only triathlon (that I know of) that provides tents for changing between transitions.  Every other race (including the 70.3) requires you to transition right by your bike…so there’s only so much you can change.  Inside the tent, there are naked men everywhere…putting on creams, tights, sunscreen, shoes, etc.  There are also fluids and nutrition, along with people to help manage the chaos.

After changing, I headed out of the tent and towards my bike for the 112 mile ride.  This would be the longest I had ever ridden…my previous best distance being only 70 miles.  I wasn’t worried…I trusted the training…but nevertheless, I knew I was going to set a personal best on the bike today!  The bike course was a 6 mile ride out to a 50 mile loop that we did twice, before riding back 6 more miles to transition.  I had wanted to average around 17.5mph or more to finish in just over 6 hours.  The course was very flat, so I was confident I could do that or even better.  I figured I would be passed by one of my friends, Justin, but I expected that…he’s good on the bike.  After the first loop, he had passed me, but I was feeling great…riding at over 18mph average, had no mechanical or intestinal issues…saw my other friends cheering me on…life was good.  Then the wind picked up.  The first half of the second loop was either a direct headwind or a vicious crosswind, and at my size and with my aero wheels, it was no fun at all.  You want to sit upright to have more power in the pedal, but doing so turns you into a human wind sail.  At one point I was barely going 11mph…I would have traded all that wind for hills any day.  Around mile 80, I caught my second wind though (no pun intended), and I was able to adjust my position on my bike to allow me to use more of my hamstrings and butt to pedal…much stronger there.  I finished out the ride strong, but was passed by another friend, Brent, with 30 min left in the ride, and even though I was still committed to enjoying the day….I was definitely annoyed with that.  I finished my bike in a time of 6:40:34…about 30 minutes slower than I thought I would be coming out of the first loop, but ironically on the 17mph pace I originally wanted…so all in all, not a bad ride in the end.


Next up…a marathon. This would be another personal best, as the longest I had ever run before…in my life…was just over 13.5 miles.  Again, I wasn’t worried…I trusted the training…and by then I knew I had enough time to walk the entire thing if I had to.  I went into transition thanking God I was off that bike and feeling amazingly well in my legs.  My plan was simple, yet effective.  Run a “comfortable” pace under 13min/mile (under 12min if I felt good), and walk for 1-2min through every aid station.  The aid stations were set about a mile apart, so I was essentially running 26.2 x 1 mile repeats.  Breaking it up this way helped me not only physically, but also mentally.  For every mile, except for probably the last three and two somewhere in the middle when I needed to use the porta-potty (best feeling EVER after those two stops), I was able to stay on plan and run the full distance between aid stations…no walking…with a comfortable, yet strong stride…a 1-2 min walk with a smile on my face at every aid station, thanking every volunteer….seeing my insanely awesome friends and Mom cheering me on…and genuinely enjoying the experience.  I remember looking up at the moon towards the end and thinking how pretty it was on the horizon, and noticing that I wasn’t in any unanticipated discomfort or pain.  I was tired for sure…legs a little heavy…but no blisters, chafing, swelling, or joint pain throughout the entire run.  I’ve never been a huge fan of running…often saying that if it were at the beginning or middle of a triathlon, I probably would have never done them…but I put a lot of work into learning how to run long distances properly, and it had finally paid off.  I planned on finishing my run in 6 hours…I finished in a time of 6:01:40.


I completed my first IronMan race in a time of 14:37:38.  I wanted under 14 hours, and had I been a little faster in transition and better on the second bike loop, I would have made it….but I’m still pretty proud of myself.

In closing, I’d like to thank all the friends, families, coaches, and institutions that helped me along the way.  I cannot begin to name them all, but special shout-outs go to Woodward CrossFit, AJ’s Cyclery, Northwest YMCA, Pure Austin Quarry Lake, and Barton Springs Pool…it wouldn’t have happened without having access to these places.  As for people, I have to give thanks to my two sons, Kalen and Bryce, for behaving and patiently waiting at home while Daddy went for long rides, runs, and swims on weekends.  I thank all my friends and family who tolerated my countless Facebook, Instagram, and Snapchat postings of workouts.  However, I have to call out the three other crazies I convinced to do this race with me: Brent Baker, Andrea Baker (another first-timer!), and Justin Fosbury…you all made it a blast!


And last, but definitely not least, I have nothing but the deepest of love and greatest of gratitude to my Mom, Linda Williamson, along with my friends Pamela Gagot, Michael Strauss, and Lisel Kraus for coming all the way out to where “Boyz in da hood meets the South” and supporting me all day and night. I love you guys.


Looking back at the entire thing…from my decision to do it, to the start of training, to the finish line…it was truly a life-changing event.  The changes you go through physically, mentally, and spiritually throughout training cannot be put into words.  You end up learning that most of the limits you have in life are self-imposed…that you are capable of a lot more than you think…that you can handle a lot more than you ever thought you could.  At times, the training was the only constant in my life…almost a meditation period, when I could focus on a single movement or workout to block out everything else.  The past twelve months of my life have been intense…including the unexpected passing of my father, some radical changes in my job, and other unexpected and/or stressful changes in my personal/family life.  All in all…I have to say that this IronMan race may have been a blessing in disguise.

Now it’s time for another tattoo!!!!!

 

Priorities & Perseverance


This is not a stock ticker, but rather a health ticker…and unlike with a stock price, a downward trend is good.  Over the last 3 years or so, I’ve been on a personal mission of improving my health.  As you can see, it wasn’t perfect, but I managed to lose a good amount of weight.

So why did I do it…what was the motivation…it’s easy: I decided in 2011 that I needed to put me first.  This was me from 2009.


At my biggest, I was pushing 270lbs.  I was so busy trying to do for others, be it work, family, or friends, that I was constantly putting my own needs last, i.e. exercise and healthy eating.  You see, I actually like to exercise and healthy eating isn’t a hard thing for me, but when you start putting those things last on your priorities, it becomes easy to justify skipping the exercise or grabbing junk food because you’re short on time or exhausted from being the “hero”.

Now I have battled weight issues most of my life.  Given how I looked as a baby, this shouldn’t come as a surprise. LOL


But I did thin out as a child.


To only get bigger again


And even bigger again


But then I got lucky.  My metabolism kicked into high gear around 20, I grew about 5 inches, and since I was playing a ton of basketball daily, I could eat anything I wanted and still stay skinny.


I remained so up until I had my first child; then the pounds began to come on.  Many parents will tell you that the first time is always more than you expected, so it’s not surprising that, with sleep deprivation and stress, you gain weight.  To make it even more fun, I had decided to start a new job and buy a new house a few years later, when my second child came…even more “fun”.


To be clear, I’m not blaming any of my weight gain on these events; however, they became easy crutches to justify putting myself last.  And here’s the crazy part: by doing all this, I actually ended up doing less for those I cared about in the long run, because I was physically exhausted, mentally fatigued, and emotionally spent a lot of the time.

So, around October of 2012, I made a decision.  In order for me to be the man I wanted to be for my family, friends, and even colleagues, I had to put myself first.  While it sounds selfish, it’s the complete opposite.  In order to be the best I could be for others, I realized I had to get myself together first.  For those of you who followed me on Facebook then, you already know what it took…a combination of MyFitnessPal calorie tracking and a little-known workout program called Insanity.


Me and my boy Shaun T worked out religiously…every day…sometimes mornings…sometimes afternoons…sometimes evenings.  I carried him with me on all my work travel, on my laptop and phone…doing Insanity videos in hotel rooms around the world.  I did the 60 day program about 4 times through (with breaks in between cycles)…adding in some weight workouts towards the end.  The results were great, as you can see in the first graphic starting around October 2012.  By staying focused and consistent, I dropped from about 255lbs to 226lbs at my lowest in July 2013.  I got rid of a lot of XXL shirts and 42in waist pants/shorts, and got to a point where I didn’t always feel the need to swim with a shirt on….if ya know what I mean ;-).  So August rolled around, and while I was feeling good about myself…I didn’t feel great, because I knew that while I was lighter and healthier, I wasn’t necessarily that much stronger.  I knew that if I wanted to really be healthy and keep this weight off, I’d need more muscle mass…plus I’d look better too :-P.

So the CrossFit journey began.

Now I’ll be honest, it wasn’t my first thought.  I had read all the horror stories about injuries and seen some of the cult-like stuff about it.  However, a good friend of mine from college was a coach, and she pretty much called me out on it…she was right…I was judging something based on others’ opinions and not my own (which is WAY outta character for me).  So…I went to my first CrossFit event…the Women’s Throwdown in Austin, TX (where I live) held by Woodward CrossFit in July of 2013.  It was pretty awesome….it wasn’t full of muscle heads yelling at each other or insane paleo-eating nut jobs trying to outshine one another…it was just hardworking athletes pushing themselves as hard as they could…for a great cause (it’s a charity event)…and having a lot of fun.  I planned to only stay for a little bit, but ended up staying the whole damn day! Long story short…I joined Woodward CrossFit a few weeks later (the delay was because I was determined to complete my last Insanity round, plus I had to go on a business trip), which was around the week of my birthday (Aug 22).



Fast forward a little over a year, with a recently added 21-day Fitness Challenge by David King (who also goes to the same gym), and as of today I’m down about 43lbs (212), with a huge reduction in body fat percentage.  I don’t have the starting or current percentages, but let’s just say all 43lbs lost was fat, and I’ve gained a good amount of muscle in the last year as well…which is why the line flattened a bit before I kicked it up another notch with the 21-day challenge last month.

Now I’m not posting any more pictures, because that’s not the point of this post (but trust me…I look goooood :P).  My purpose is exactly what the subject says: priorities & perseverance.  What are you prioritizing in your life?  Are you putting too many people’s needs ahead of your own?  Are you happy as a result?  If you were like me, I already know the answer…but you don’t have to stay that way.  You only get one chance at this life, so make the most out of it.  Make the choice to put your happiness first, and I don’t mean selfishly…that’s called pleasure.  You’re happier when your loved ones are doing well and happy…you’re happier when you have friends who like you and that you can depend on….you’re happier when you kick ass at work…you’re happier when you kill it on the basketball court (or whatever activity you like).  Make the decision to be happy, set your goals, then persevere until you attain them…you will stumble along the way…and there will be those around you who either purposely or unknowingly discourage you, but stay focused…it’s not their life…it’s yours.  And when it gets really hard…just remember the wise words of Stuart Smalley.

Canonical’s Office of The CDO: A 5 Year Journey in DevOps

I’m often asked what being the Vice President of Cloud Development and Operations means, when I’m introduced for a talk or meeting, or when someone happens to run across my LinkedIn profile or business card.

The office of the CDO has been around at Canonical for so long that I forget the approach we’ve taken to IT and development is either foreign or relatively new to a lot of IT organizations, especially in the traditional “enterprise” space. I was reminded of this when I gave a presentation at an OpenStack Developer Summit entitled “OpenStack in Production: The Good, the Bad, & the Ugly” a year ago in Portland, Oregon. Many in the audience were surprised by the fact that Canonical not only uses OpenStack in production, but uses our own tools, Juju and MAAS, created to manage these cloud deployments. Furthermore, some attendees were floored by how well our IT and engineering teams actually worked together to run globally accessible and extensively used services on these clouds.

Before going into what the CDO is today, I want to briefly cover how it came to be. The story of the CDO goes back to 2009, when our CEO, Jane Silber, and Founder, Mark Shuttleworth, were trying to figure out how our IT operations team and web services teams could work better…smarter together. At the same time our engineering teams had been experimenting with cloud technologies for about a year, going so far as to provide the ability to deploy a private cloud in our 9.04 release of Ubuntu Server.


It was clear to us then, that cloud computing would revolutionize the way in which IT departments and developers interact and deploy solutions, and if we were going to be serious players in this new ecosystem, we’d need to understand it at the core. The first step to streamlining our development and operations activities was to merge our IT team, who provided all global IT services to both Canonical and the Ubuntu community, with our Launchpad team, who developed, maintained, and serviced Launchpad.net, the core infrastructure for hosting and building Ubuntu. We then added our Online Services team, who drove our Ubuntu One related services, and this new organization was called Core DevOps…thus the CDO was born.

Soon after the formation of the CDO, I was transitioning between roles within Canonical, going from acting CTO to Release Manager (10.10 on 10.10.10…perfection! 🙂 ), then landing as the new manager for the Ubuntu Server and Security teams. Our server engineering efforts continued to become more and more focused on the cloud, and we had also begun working on a small, yet potentially revolutionary, internal project called Ensemble, which was focused on solving the operational challenges system administrators, solution architects, and developers would face in the cloud, when one went from managing 100s of machines and associated services to 1000s.

All of this led to a pivotal engineering meeting in Cape Town, South Africa in early 2011, where management and technical leaders representing all parts of the CDO and Ubuntu Server engineering met with Mark Shuttleworth, along with the small team working on Project Ensemble, to determine the direction Canonical would take with our server product.


Until this moment in time, while we had been dabbling in cloud computing technologies with projects like our own cloud-init and the Amazon EC2 AMI Locator, Ubuntu Server was still playing second fiddle to Ubuntu for the desktop. While being derived from Debian (the world’s most widely deployed and dependable Linux web hosting server OS) certainly gave us credibility as a server OS, the truth was that most people thought of desktops when you mentioned Ubuntu the OS. Canonical’s engineering investments were still primarily client focused, and Ubuntu Server was nothing much more than new Debian releases at a predictable cadence, with a bit of cloud technology thrown in to test the waters. But this weeklong engineering sprint was where it all changed. After hours and hours of technical debates, presentations, demonstrations, and meetings, two major decisions were made that week that would catapult Canonical and Ubuntu Server to the forefront of cloud computing as an operating system.

The first decision made was that OpenStack was the way forward. The project was still in its early days, but it had already piqued many of our engineers’ interest, not only because it was being led by friends of Ubuntu and former colleagues of Canonical, Rick Clark, Thierry Carrez, and Soren Hansen, but because the development methods, project organization, and community were derived from Ubuntu, and thus it was something we knew had the potential to grow and sustain itself as an open source project. While we still had to do our due diligence on the code, and discuss the decision at UDS, it was clear to many then that we’d inevitably go that direction.

The second decision made was that Project Ensemble would be our main technical contribution to cloud computing, and more importantly, the key differentiator we needed to break through as the operating system for the cloud. While many in our industry were still focused on scale-up, legacy enterprise computing and the associated tools and technologies for things like configuration and virtual machine management, we knew orchestrating services and managing the cloud were the challenges cloud adopters would need help with going forward. Project Ensemble was going to be our answer.

Fast forward a year to early 2012. Project Ensemble had been publicly unveiled as Juju, the Ubuntu Server team had fully adopted OpenStack and plans for the hugely popular Ubuntu Cloud Archive were in the works, and my role had expanded to Director of Ubuntu Server, covering the engineering activities of multiple teams working on Ubuntu Server, OpenStack, and Juju. The CDO was still covering IT operations, Launchpad, and Online Services, but now we had started discussing plans to transition our own internal IT infrastructure over to an internal cloud computing model, essentially using the very same technologies we expected our users, and Canonical customers, to depend on.  As part of the conversation on deploying cloud internally, our Ubuntu Server engineering teams started looking at tools to adopt that would provide our internal IT teams and the wider Ubuntu community the ability to deploy and manage large numbers of machines installed with Ubuntu Server. Originally, we landed on creating a tool based on Fedora’s Cobbler project, combined with Puppet scripts, and called it Ubuntu Orchestra. It was perfect for doing large-scale, coordinated installations of the OS and software, such as OpenStack; however, it quickly became clear that doing this install was just the beginning…and unfortunately, the easy part.  Managing and scaling the deployment was the hard part. While we had called it Orchestra, it wasn’t able to orchestrate much beyond machine and application install. Intelligently and automatically controlling the interconnected services of OpenStack or Hadoop in a way that allowed for growth and adaptability was the challenge.  Furthermore, the ways in which you could describe the deployments were restricted to Puppet and its descriptive scripting language and approach to configuration management…what about users wanting Chef?…or CFEngine?…or the next foobar configuration management tool to come about?  If we only had a tool for orchestrating services that ran on bare metal, we’d be golden….and thus Metal as a Service (MAAS) was born.

MAAS was created for the sole purpose of providing Juju a way to orchestrate physical machines the same way Juju managed instances in the cloud.  The easiest way to do this was to create something that gave cloud deployment architects the tools needed to manage pools of servers like the cloud.  Once we began this project, we quickly realized that it was good enough to stand on its own, i.e. as a management tool for hardware, and so we expanded it into a full-fledged project.  MAAS grew to include a comprehensive API and a user-tested GUI, thereby letting Juju, Ubuntu Server deployment, and Canonical’s Landscape product leverage the same tool for managing hardware…allowing all three to benefit from the learnings and experiences of having a shared codebase.
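To make that relationship concrete, here’s a rough sketch of pointing Juju at a MAAS region via the ~/.juju/environments.yaml of that era. Treat it as illustrative only: exact key names varied across Juju 1.x releases, and the server address and API key below are placeholders.

    environments:
      maas:
        type: maas                                # use the MAAS provider instead of a public cloud
        maas-server: 'http://<maas-host>/MAAS/'   # placeholder address of the MAAS region controller
        maas-oauth: '<MAAS-API-KEY>'              # placeholder API key from your MAAS account page
        default-series: precise                   # Ubuntu series to install on the allocated machines

With something like that in place, ‘juju bootstrap’ allocates a physical machine from the MAAS pool, and every subsequent ‘juju deploy’ lands on bare metal exactly as it would on cloud instances.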

The CDO Evolves

In the middle of 2012, the then-VP of the CDO decided to seek new opportunities elsewhere.  Senior management took this opportunity to look at the current organizational structure of Core DevOps, and adjust/adapt according to both what we had learned over the past 3 1/2 years and where we saw the evolution of IT and server/cloud development heading.  The decision was made to focus the CDO more on cloud and scale-out server technologies, so the Online Services team was moved over to a more client-focused engineering unit. This left Launchpad and internal IT in the CDO; however, the decision was also made to move all server and cloud related project engineering teams and activities into the organization. The reasoning was pretty straightforward: put all of server dev and ops into the same team to eliminate “us vs them” siloed conversations…streamline the feedback loop between engineering and internal users to accelerate both code quality and internal adoption.  I made a career-growth decision to apply for the chance to lead the CDO, was fortunate enough to get it, and thus became the new Vice President of Core DevOps.

My first decision as the new lead of the CDO was to change the name.  It might seem trivial, but while I felt it was key to keep to our roots in DevOps, the name Core DevOps no longer applied to our organization because of the addition of so much more server and cloud/scale-out computing focused engineering.  We had also decided to scale back internal feature development on Launchpad, focusing more on maintenance and reviewing/accepting outside contributions.  Out of a pure desire to reduce the overhead that department name changes usually cause in a company, I decided to keep the acronym and go with Cloud and DevOps at first. However, the name (and quite honestly the job title itself) seemed a little too vague…I mean, what does VP of Cloud or VP of DevOps really mean?  I felt like it would have been analogous to being the VP of Internet and Agile Development…heavy on buzzwords and light on actual meaning.  So I made a minor tweak to “Cloud Development and Operations”, and while arguably still abstract, it at least covered everything we did within the organization at a high level.

At the end of 2012, we internally gathered representation from every team in the “new and improved” CDO for a week-long strategy session on how we’d take advantage of the reorganization. We reviewed team layouts, workflows, interactions, tooling, processes, development models, and even which teams individuals were on.  Our goal was to ensure we didn’t duplicate effort unnecessarily, share best practices, eliminate unnecessary processes, break down communication silos, and generally come together as one true team. The outcome: some teams were broken apart, some others newly formed, processes adapted, missions changed, and some people lost because they didn’t feel like they fit anymore.

Entering into 2013, the goal was to simply get work done:

  • Work to deploy, expand, and transition developers and production-level services to our internal OpenStack clouds: CanoniStack and ProdStack.
  • Work to make MAAS and Juju more functional, reliable, and scalable.
  • Work to make Ubuntu Server better suited for OpenStack, more easily consumable in the public cloud, and faster to bring up for use in all scale-out focused hardware deployments.
  • Work to make Canonical’s Landscape product more relevant in the cloud space, while continuing to be true to its roots of server management.

All this work was in preparation for the 14.04 LTS release, i.e. the Trusty Tahr. Our feeling was (and still is) that this had to be the release when it all came together into a single integrated solution for use in *any* scale-out computing scenario…cloud…hyperscale…big data…high performance computing…etc.  If a computing solution involved large numbers of computational machines (physical or virtual) and massively scalable workloads, we wanted Ubuntu Server to be the de facto OS of choice.  By the end of last year, we had achieved a lot of the IT and engineering goals we had set, and felt pretty good about ourselves.  However, as a company we quickly discovered there was one thing we left out of our grand plan to better align and streamline our efforts around scale-out technologies….professional delivery and support of these technologies.

To be clear, Canonical had not forgotten about growing or developing our teams of engineers and architects responsible for delivering solutions and support to customers. We had just left them out of our “how can we do this better” thinking when aligning the CDO. We were initially focused on improving how we developed and deployed, and we were benefiting from the changes made.  However, as we began growing our scale-out computing customer base in hyperscale and cloud (both below and above), we began to see that the same optimizations made between Dev and Ops needed to be made for delivery. So in December of last year, we moved all hardware enablement and certification efforts for servers, along with the technical support and cloud consultancy teams, into the CDO.  The goal was to strengthen the product feedback loop, remove more “us vs them” silos, and improve the response times to customer issues found in the field.  We were basically becoming a global team of scale-out technology superheroes.


It’s been only 3 months since our server and cloud enablement and delivery/support teams joined the CDO, and there are already signs of improvement in responsiveness to support issues and collaboration on technical design.  I won’t lie and say it’s all been butterflies and roses, nor will I say we’re done and running like a smooth, well-oiled machine, because you simply can’t do that in 3 months, but I know we’ll get there with time and focus.

So there you have it.

The Cloud Development and Operations organization in Canonical is now 5 years strong.  We deliver global, 24×7 IT services to Canonical, our customers, and the Ubuntu community.  We have engineering teams creating server, cloud, hyperscale, and scale-out software technologies and solutions to problems some have still yet to even consider.  We deliver these technologies and provide customer support for Canonical across a wide range of products, including Ubuntu Server and Cloud.  This end-to-end integration of development, operations, and delivery is why Ubuntu Server 14.04 LTS, aka the Trusty Tahr, will be the most robust, technically innovative release of Ubuntu for the server and cloud to date.

Screw the Ubuntu Edge…We’re Giving Away $30,000!!!


So I’m partially kidding…the Ubuntu Edge is quickly becoming a crowdfunding phenomenon, and everyone should support it if they can.  If we succeed, it will be a historic moment for Ubuntu, crowdfunding, and the global phone industry as well.

But I Don’t Wanna Talk About That Right Now

While I’m definitely a fan of the phone stuff, I’m a cloud and server guy at heart, and what’s gotten me really excited this past month are two significant (and freaking awesome) announcements.

#1 The Juju Charm Championship


First off, if you still don’t know about Juju, it’s essentially our attempt at making Cloud Computing for Human Beings.  Juju allows you to deploy, connect, manage, and scale web services and applications quickly and easily…again…and again…AND AGAIN!  These services are captured in what we call charms, which contain the knowledge of how to properly deploy, configure, connect, and scale the services and applications you will want to deploy in the cloud.  We have hundreds of charms for every popular and well-known web service and application in use in the cloud today.  They’ve been authored and maintained by the experts, so you don’t have to waste your time trying to become one.  Just as Ubuntu depends on a community of packagers and developers, so does Juju.  Juju goes only as far as our Charm Community will take us, and this is why the Charm Championship is so important to us.
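If you’ve never seen that workflow, here’s a minimal command-line sketch of the deploy/connect/scale cycle, assuming an environment that’s already configured and bootstrapped (mysql and wordpress are real example charms from the store):

    juju deploy mysql                    # deploy the database service from its charm
    juju deploy wordpress                # deploy the blog service from its charm
    juju add-relation wordpress mysql    # connect the two; the charms handle the configuration
    juju expose wordpress                # open the service up to the outside world
    juju add-unit wordpress -n 2         # …AND AGAIN! scale out with two more units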

So….what is this Charm Championship all about?  We took notice of the fantastic response to the Cloud Prize contest run by our good friends (and Ubuntu Server users) over at Netflix.  So we thought we could do something similar to boost the number of full service solutions deployable by Juju, i.e. Charm Bundles.  If charms are the APT packages of the cloud, bundles are effectively the package seeds, allowing you to deploy groups of services, configured and interconnected, all at once.  We’ve chosen this approach to increase our bundle count because we know from our experience with Ubuntu that the best approach for growth is harvesting and cultivating the expertise and experience of the experts regularly developing and deploying these solutions.  For example, we at Canonical maintain and regularly deploy an OpenStack bundle to allow us to quickly get our clouds up for both internal use and for our Ubuntu Advantage customers.  We have master-level expertise in OpenStack cloud deployments, and thus have codified this into our charms so that others are able to benefit.  The Charm Championship is our attempt to replicate this sharing of similar master-level expertise across more service/application bundles…..BY OFFERING $30,000 USD IN PRIZE MONEY! Think of how many Ubuntu Edge phones that could buy you…well, unless you desperately need to have one of the first 50 :-).
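To give a feel for the bundle idea, here’s a rough sketch of one; the exact bundle syntax evolved over time, so take the format as illustrative rather than definitive.  A bundle simply names a group of charms, their unit counts, and the relations between them, so a single deploy stands up the whole stack:

    wordpress-site:
      services:
        wordpress:
          charm: cs:precise/wordpress   # which charm to deploy
          num_units: 2                  # how many units to start with
        mysql:
          charm: cs:precise/mysql
          num_units: 1
      relations:
        - [wordpress, mysql]            # wire the services together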

#2 JujuCharms.com

From the very moment we began thinking about creating Juju years ago…we always envisioned eventually creating an interface that gives solution architects the ability to graphically create, deploy, and interact with services…replicating the whiteboard planning commonly employed in the planning phase of such solutions.

The new Juju GUI, now integrated into JujuCharms.com, is the realization of our vision, and I’m excited as hell at the possibilities opened and the technical roadblocks removed by the release of this tool.  We’ve even charmed it, so you can ‘juju deploy juju-gui’ into any supported cloud, bare metal (MAAS), or local workstation (via LXC) environment.  Below is a video of deploying OpenStack via our new GUI, and a perfect example of the possibilities that are opened up now that we’ve released this innovative and f*cking awesome tool:

The best part here is that you can play with the new GUI RIGHT NOW by selecting the “Build” option on jujucharms.com….so go ahead and give it a try!
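And if you’d rather bring up your own copy of the GUI, it really is just another charm deploy (a sketch assuming an already-bootstrapped environment; ‘juju status’ will show the public address to point your browser at):

    juju deploy juju-gui    # the GUI itself is charmed
    juju expose juju-gui    # make it reachable from your browser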

Join the Championship…Play with the GUI…then Buy the Phone

Cause I will definitely admit…it’s a damn sexy piece of hardware. 😉

Keep Calm, Juju is still F*cking Awesome!


Doing cloud since 2008


Lock-In: Why Your OS Choice Matters in the Cloud


I ran across an article last week about the fear of cloud lock-in being a “key concern of companies considering a cloud move”.  The article was spot on in pointing out that dependence upon some of the higher level public cloud service features hinders a user’s ability to migrate to another cloud.  There is a real risk in being locked into a public cloud service, not only due to dependence on the vendor’s services, but also the complexity and costs of trying to move your data out.  The article concludes by stating that there “aren’t easy answers to this problem”, which I think is true…but I also think that by simply keeping two things in mind, a user can do a lot to mitigate the lock-in risk.

1. Choose an Independently Produced Operating System

Whatever solutions you decide to deploy, it’s absolutely critical that you choose an operating system not produced by the public cloud provider.  This recent fad of public cloud providers creating their own specific OS is just history repeating itself, with HP-UX, IRIX, Solaris, and AIX being replaced by the likes of GCEL and Amazon Linux.  Sure, the latter are Linux-based, but just like the proprietary UNIX operating systems of the past, they are developed internally, only support the infrastructure they’re designed for, and are only serviceable by the company that produces them.  Of course the attraction to using these operating systems is understandable, because the provider can offer them for “free” to users desiring a supported OS in the cloud.  They can even price services lower to customers who use their OS as an incentive and “benefit”, with the claim that it allows them to provide better and faster support.   It’s a perfect solution….at first.  However, once you’ve deployed your solution on a public cloud vendor-specific OS, you have taken a huge first step towards lock-in.  Sure, the provider can say their OS is based on an independently produced operating system, but that means nothing once the two have diverged due to security updates and fixes, not to mention release schedules and added features.  There’s no way the public cloud vendor OS can keep up, and they really have no incentive to, because they’ve already got you….the longer you stay on their OS, the more you will depend on their application and library versions, thus the deeper you get.  A year or two down the road, another public cloud provider pops up with better service and/or prices, but you can’t move without the risk of extended downtimes and/or loss of data, in addition to the costs of paying your IT team the overtime it will take to architect such a migration.  We’ve all been here before with proprietary UNIX, and luckily Linux arrived on the scene just in time to save us.

2. Choose an Operating System with Service Orchestration Support

Most of the lock-in features provided by public clouds are simply “Services as a Service”, be it a database service, a big data/MapReduce service, or a development platform service like Rails or Node.  All of these services are just applications easily deployed, scaled, and connected to existing solutions.  Of course it’s easy to understand the attraction to using these public cloud provider services, because it means no setup, no maintenance, and someone else to blame if s**t goes sideways with the given service.  However, again, by accepting these services, you are also accepting a level of lock-in.  By creating/adapting your solution(s) to use the load balancing, monitoring, and/or database service, you are making them less portable and thus harder/costlier for you to migrate.  I can’t blame the providers for doing this, because it makes *perfect* sense from a business perspective:

I’m providing a service that is commoditized…I can only play price wars for so long….so how can I keep my customers once that happens….services!  And what’s more, I don’t want them to easily use another cloud, so I’ll make sure my services require them to utilize my API….and possibly even provide a better experience on my own OS.

Now I’m not saying you shouldn’t use these services, but you should be careful about how much of them you consume and depend on.  If you ever intend or need to migrate, you will want a solution that covers the scenario of the next cloud provider not having the same service…or the service being priced at a higher rate than you can afford…or the service quality/performance not being as good.  This is where having a good service orchestration solution becomes critical, and if you don’t want to believe me…just ask folks at IBM or OASIS.  And for the record, service orchestration is not configuration management….and you can’t get there by placing a configuration management tool in the cloud.  Trying to get configuration management tools to do service orchestration is like trying to teach a child to drive a car.  Sure, it can be done pretty well in a controlled empty parking lot…on a clear day.  However, once you add unpredictable weather, pedestrians, and traffic, it gets real bad, real quick.  Why?  Because just like your typical configuration management tool, a child lacks the intelligence to react and adapt to changing conditions in the environment.

Choose Ubuntu Server

Obviously I’m going to encourage the use of Ubuntu Server, not just because I work for Canonical or am an Ubuntu community member, but because I actually believe it’s currently the best option around.  Canonical and Ubuntu Server community members have put countless hours and effort into ensuring Ubuntu Server runs well in the cloud, and Canonical is working extremely hard with public cloud providers to ensure our users can depend on our images and public cloud infrastructure to get the fastest, cheapest, and most efficient cloud experience possible.   There’s much more to running well in the cloud than just putting up an image and saying “go!”.   Just to name a few examples: there’s ensuring all instance sizes are supported, adding in-cloud mirrors across regions and zones to ensure faster/cheaper updates, natively packaging API tools and hosting them in the archives, updating images with SRUs to avoid costly time spent updating at first boot, making daily development images available, and ensuring Juju works within the cloud to allow for service orchestration and migration to other supported public clouds.
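As one concrete example of that plumbing, every Ubuntu cloud image ships cloud-init, which consumes the user data you pass at instance launch so that first boot does your customization for you.  A minimal illustrative #cloud-config, with the package and command being just examples:

    #cloud-config
    package_update: true        # refresh package lists (hitting those in-cloud mirrors)
    packages:
      - nginx                   # example package installed at first boot
    runcmd:
      - service nginx start     # example command run once at the end of first boot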

Speaking of Juju, we’ve also invested years (not months…YEARS) into our service orchestration project, and I can promise you that no one else, right now, has anything that comes close to what it can do.  Sure, there are plenty of people talking about service orchestration…writing about service orchestration…and some might even have a prototype or beta of a service orchestration tool, but no one comes close to what we have in Juju…no one has the community engagement behind their toolset that we have…and it’s growing every day.  I’m not saying Juju is perfect by any means, but it’s the best you’re going to find if you are really serious about doing service orchestration in the cloud, or even on the metal.
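
If you’ve never seen Juju in action, a minimal session looks something like this (the charm names are from the public charm collection, and exact flags may vary slightly between Juju versions):

    juju bootstrap                      # stand up an environment on your configured cloud
    juju deploy mysql                   # deploy a database service from its charm
    juju deploy wordpress               # deploy the application service
    juju add-relation wordpress mysql   # wire the two services together
    juju expose wordpress               # open the service to the outside world
    juju add-unit wordpress -n 2        # scale out to three wordpress units

The key point: the same commands work against any supported cloud (or bare metal), which is exactly the migration story I described above.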

Over the next 12 months, you will see Ubuntu continue to push the limits of what users can expect from their operating system when it comes to scale-out computing.  You have already seen what the power of the Ubuntu community can do with a phone and tablet….just watch what we do for the cloud.

Can Ubuntu Server Roll Too?

Wow…I just realized how long it’s been since I did a blog post, so, first off, apologies for that.  FWIW, it’s not that I haven’t had any good things to say or write about, it’s just that I haven’t made the time to sit down and type them out…I need a blog thought transfer device or something :-).  Anyway, with all the talk about Ubuntu doing a rolling release, I’ve been thinking about how that would affect Ubuntu Server releases, and more importantly…could Ubuntu Server roll as well?  In answering this question, I think it comes down to two main points of consideration (beyond what the client flavors would already have to consider).


How Would This Affect Ubuntu Server Users?

We have a lot of anecdotal data and some survey evidence that most Ubuntu Server users mainly deploy the LTS.  I doubt this surprises people, given the support life for an LTS Ubuntu Server release is 5 years, versus only 18 months for a non-LTS Ubuntu Server release.  Your average sysadmin is extremely risk averse (for good reason), and thus wants to minimize any risk of unwanted change in his/her infrastructure.  In fact, most production deployments don’t even pull packages from the main archives; instead, they mirror them internally to control exactly what updates and fixes roll out to internal client and/or server machines, and when (a rough sketch of such a setup follows below).  Using a server operating system that requires you to upgrade every 18 months, just to continue getting fixes and security updates, doesn’t work in environments where the systems are expected to support 100s to 1000s of users for multiple years, often without significant downtime.

With that said, I think there are valid uses of non-LTS releases of Ubuntu Server, with most falling into two main categories: Pre-Production Test/Dev and Start-Ups…and the reasons are actually the same.  The non-LTS version is perfect for those looking to roll out products or solutions intended to be production-ready in the future.  These releases give users a mechanism to continually test what their product/solution will eventually look like in the LTS, as the versions of the software they depend upon are updated along the way.  That is, they’re not stuck having to develop against the old LTS and hope things don’t change too much in two years, or use some “feeder” OS, where there’s no guarantee the forked and backported enterprise version will behave the same or contain the same versions of the software they depend on.

In both of these scenarios, the non-LTS is used because it’s fluid, and going to a rolling release only makes this easier…and a little better, I dare say.  For one, if the release is rolling, there’s no huge release-to-release jump during your test/dev cycle; you just continue to accept updates when ready.  In my opinion, this actually makes rolling back easier as well, in that you have fewer parts moving all at once to roll back if needed.  The second thing is that the process for getting a fix or a new feature from upstream is much less involved, because there’s no SRU patch backporting…just the new release with the new stuff.  Now admittedly, this also means the possibility of new bugs and/or regressions; however, given these versions (or ones built subsequently) are destined to be in the next LTS anyway, the faster the bugs are found and sorted, the better for the user in the long term.  If your solution can’t handle the churn, you either don’t upgrade and accept the security risk, or you smoke test your solution with the new package versions in a duplicate environment.  In either case, you’re not running in production, so in theory…a bug or regression shouldn’t be the end of the world.

It’s also worth calling out that from a quality and support perspective, a rolling Ubuntu Server means the Ubuntu developers and Canonical engineering staff who normally spend a lot of time doing SRUs on non-LTS Ubuntu Server releases can now focus their efforts on the Ubuntu Server LTS release…where the majority of our users and deployments are.
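
For the curious, an internal mirror can be as simple as running apt-mirror on a box inside your network (with a web server in front of the mirror path) and pointing every machine at it; the internal hostname below is obviously made up:

    # /etc/apt/mirror.list on the mirror box (illustrative excerpt)
    set base_path /var/spool/apt-mirror
    deb http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse
    deb http://archive.ubuntu.com/ubuntu precise-updates main restricted universe multiverse
    deb http://archive.ubuntu.com/ubuntu precise-security main restricted universe multiverse
    clean http://archive.ubuntu.com/ubuntu

    # /etc/apt/sources.list on each internal machine (illustrative)
    deb http://mirror.internal.example.com/ubuntu precise main restricted universe multiverse

Updates only reach internal machines when the admin syncs the mirror…which is exactly the kind of control most shops want.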


How Would This Affect Juju Users?

In terms of Juju, a move to a rolling release tremendously simplifies some things and mildly complicates others.  From the point of view of a charm author, this makes life much easier.  Instead of writing a charm to use a package in one release, then continuously duplicating and updating it to work with subsequent releases that have newer packages, you only maintain two charms…a maximum of three if you want to include options for running code from upstream.  The idea is that every charm in the collection would default to using packages from the latest Ubuntu Server LTS, with an option to use the packages in the rolling release, and possibly an extra option to pull and deploy directly from upstream (there’s a sketch of what this might look like below).  We already do some of this now, but it varies from charm to charm…a rolling server policy would demand we make this mandatory for all accepted charms.  The only place where the rules would be slightly different is the Ubuntu Cloud Archive, where the packages don’t roll; instead, new archive pockets are created for each OpenStack release.

From a user’s perspective, a rolling release is good, yet also complicated unless we help…and we will.  In terms of the good, users will know every charmed service works and will only have to decide between LTS and rolling as the deployment OS, whereas now they have to choose a release, then hope the charm has been updated to support that release.  The reduction in charm-to-release complexity also allows us to do better testing of charms, because we don’t have to test every charm against oneiric, precise, raring, “s”, etc., just precise and the rolling release…giving us more time to improve and deepen our test suites.
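
As a sketch of what that mandatory option might look like, a charm’s config.yaml could grow a source setting along these lines (the option name and values here are hypothetical, not an accepted charm policy):

    # config.yaml (hypothetical excerpt)
    options:
      source:
        type: string
        default: lts        # hypothetical default: packages from the latest Ubuntu Server LTS
        description: |
          Where the service's packages come from: "lts" (default),
          "rolling" for the rolling release archive, or "upstream" to
          pull and deploy directly from the upstream project.

A user would then pick the archive at deploy time, or flip it later with something like juju set wordpress source=rolling.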

With all that said, a move to a rolling Ubuntu Server release for non-LTS also adds the danger of inconsistent package versions for a single service in a deployment.  For example, you could deploy a solution with 5 instances of wordpress 3.5.1 running, then we update the archive to wordpress 3.6, then you decide to add 3 more units, thus giving you a wordpress service of mixed versions…this is bad.  So how do we solve this?  It’s actually not that hard.

First, we would need to ensure that Juju never automatically adds units to an existing service if there’s a mismatch between the binary versions on the currently deployed instances and those the new ones are about to install.  If Juju detected the binary inconsistency, it would need to return an error, optionally asking the user if he/she wanted it to upgrade the currently running instances to match the new binary versions (a rough sketch of this check follows below).  We could also add some sort of --I-know-what-I-am-doing option to give freedom to those users who don’t care about version mismatches.

Secondly, we should ensure an existing deployment can always grow itself without requiring a service upgrade.  My current thinking here is that we’d create a package-caching charm that can be deployed alongside any existing Juju deployment.  The idea is much like squid-deb-proxy (except the cache never expires or renews), where the caching instance acts as the archive mirror for the other instances in the deployment, serving the same cached packages already deployed in that solution.  The package cache should be run in a separate instance with persistent storage, so that even if the service completely goes down, it can be restored with the same packages in the cache.
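
To make that mismatch check concrete, here’s a rough sketch of the kind of pre-flight logic Juju would need before adding units (pure illustration…the real implementation would live inside Juju itself, and wordpress is just an example package):

    #!/bin/sh
    # Hypothetical pre-flight check before 'juju add-unit' (sketch only)
    PKG=wordpress

    # Version currently installed on an existing unit of the service
    deployed=$(dpkg-query -W -f='${Version}' "$PKG")

    # Version a freshly deployed unit would install from the archive today
    candidate=$(apt-cache policy "$PKG" | awk '/Candidate:/ {print $2}')

    if [ "$deployed" != "$candidate" ]; then
        echo "ERROR: $PKG is $deployed on existing units, but the archive now has $candidate." >&2
        echo "Refusing to add units; upgrade the service first (or force the add" >&2
        echo "with the hypothetical --I-know-what-I-am-doing option)." >&2
        exit 1
    fi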


So…Can Ubuntu Server Roll?

Yes We Can!

I honestly think we can and should consider it, but I’d also like to hear the concerns of folks who think we shouldn’t.