Posted on 01 July 2012 by Jeffrey Hulten in conferences, devops, devopsdays, community, and culture
Sitting on a plane on my way home from Velocity and DevOpsDays, I am thinking about the people I met, the things I learned, and the ways we need to improve communicating the power of this culture and community. Bethany and I had a great time (she was able to get a pass to attend the social events with me) talking with interesting folks including old friends like @mtnygard, @sascha_d, and the @seadevops crew.
The Velocity crew I met were all great (with a special shout-out to Patrick for his assistance and general awesomeness), but the conference did have some issues. I was a little late arriving for the keynotes on Tuesday and the main ballroom was standing room only. More specifically, there was no good way to reach the empty seats spread throughout the hall, and the chairs were packed so tightly that people felt the need to take an extra half seat each. I expect the Velocity crew will need to evaluate whether they can make the space work next year, but cramming more chairs into the room is not going to help.
There were a lot more women at Velocity this year compared to previous years, according to others who have attended in the past. Attendees were well behaved, and I at least did not see any sexism on the part of my fellow conference-goers. Some of the vendors, however, did not get the memo. The new CTO of GoDaddy put his foot in his mouth with a “Sorry I couldn’t get any GoDaddy girls here today… Danica Patrick is stuck in traffic” joke that pissed off at least a few people. I had a good conversation with him and others from the GoDaddy booth the day before, and they indicated that they had gotten the sexism/SOPA messages and were changing their style, but that was undermined by his performance. We will see what happens with their upcoming Olympics advertising campaign, but I am not hopeful.
And then there was Edgecast… Adrian Cockcroft from Netflix said it best:
Why does Edgecast torture boothbabes by making them balance on high heels all day? Embarrassing. No BBs at #velocityconf 2013 plz
On the flip side, I also realize I need to watch my assumptions. Apparently one vendor actually does have three blonde, Swedish gals in their twenties working as interns. They are all pursuing CS degrees, and one is writing her thesis on gossip protocols. My assumptions were wrong, and that is on my head.
As for the sessions, the talk by Dr. Richard Cook on how complex systems fail was very interesting. He told a story about a hospital that standardized on one manufacturer of infusion pump, the device used to deliver fluids and medications to patients. The new pumps were all introduced in a relatively narrow window of time, and a year after the first pumps were put in service they started to fail. Specifically, they would not take a new configuration. It started with one pump early in the morning, and within a few hours as many as 20% of the pumps were non-responsive.
The culprit? The pumps had a mandatory upgrade period, and at configuration time it seemed appropriate to set it to one year. Once the upgrade period passed, a pump would no longer take a new configuration. When they figured it out, they had to reconfigure every pump in the hospital. The setting was chosen because it seemed like the right thing to do at the time, but the consequences were not fully understood.
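To make the failure mode concrete, here is a minimal hypothetical sketch (not the actual pump firmware, whose internals the talk did not detail): when a fleet deployed in a narrow window shares a hard upgrade deadline, the lockouts cluster, so one failure quickly becomes many. All names and dates below are illustrative.

```python
from datetime import date, timedelta

# Assumed behavior: each device records its configuration date and refuses
# new configuration once a mandatory upgrade period has elapsed.
UPGRADE_PERIOD = timedelta(days=365)  # "one year seemed appropriate"

class InfusionPump:
    def __init__(self, configured_on: date):
        self.configured_on = configured_on

    def accepts_configuration(self, today: date) -> bool:
        # Past the upgrade deadline, the device locks out reconfiguration.
        return today - self.configured_on <= UPGRADE_PERIOD

# A fleet rolled out over ~7 weeks shares nearly the same deadline.
fleet = [InfusionPump(date(2011, 3, 1) + timedelta(days=i)) for i in range(50)]
today = date(2012, 3, 10)
locked = sum(not p.accepts_configuration(today) for p in fleet)
print(f"{locked}/{len(fleet)} pumps refusing configuration")
```

Run a day later and the locked count grows again, which matches the pattern in the story: a trickle of failures early in the morning, a large fraction of the fleet within hours.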
Check out the full video from Velocity if this has piqued your interest.
Mike Christian from Yahoo! also had a great talk entitled “Frying Squirrels and Unspun Gyros”. When most engineers build a datacenter, they add generators, uninterruptible power supplies, and other technologies to allow the systems to ride out a disaster. What we don’t take into account is the additional complexity, and therefore additional risk, this introduces. From squirrels chewing through electrical systems to cascading HVAC failures, sometimes the complexity we accept isn’t the complexity we need.
As I was getting on the plane I saw that Amazon Web Services had issues in one of their datacenters in Virginia. They tend to get flak, but I think that is largely because many high-profile companies run in their datacenters; Amazon builds the infrastructure, but it doesn’t control its customers’ architecture. Infrastructure fails no matter who builds it, but systems can be architected to be resilient to that failure. Practicing failover between datacenters regularly is one way. Using techniques that allow you to move traffic between locations is another. It is not simple, but architecting for the public cloud, regardless of where your data is stored, will give you more resiliency, provided you practice the diversion of traffic all the time.
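The traffic-diversion idea above can be sketched as health-checked weighted routing between regions. This is a toy illustration under my own assumptions; the region names, weights, and health checks are hypothetical, not any real AWS or DNS API.

```python
import random

# Hypothetical sketch: steer traffic across regions by weight, and drain
# a region when it is marked unhealthy so survivors absorb its share.
class Router:
    def __init__(self, weights):
        self.weights = dict(weights)           # region -> share of traffic
        self.healthy = {r: True for r in weights}

    def mark_down(self, region):
        # Failover: stop sending traffic to the failed region.
        self.healthy[region] = False

    def route(self):
        live = {r: w for r, w in self.weights.items() if self.healthy[r]}
        if not live:
            raise RuntimeError("no healthy region")
        regions, weights = zip(*live.items())
        return random.choices(regions, weights=weights)[0]

router = Router({"us-east": 0.5, "us-west": 0.5})
router.mark_down("us-east")   # e.g. an outage in the Virginia region
survivor = router.route()
```

The point of practicing the diversion all the time is that `mark_down` gets exercised in routine drills, not discovered for the first time mid-outage.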
Thursday and Friday were DevOpsDays in Mountain View. DevOpsDays is a combination of prepared talks and an unconference format that allows people to suggest topics and get some space to talk about them.
A lot of the conversations I had were about culture. We talked about how we can prevent a loss of ‘control’ over the term DevOps when the kinds of vendors who are always looking for the new bandwagon start circling. We talked about building a body of knowledge that anyone can contribute to. Mostly we bonded, shared stories, conspired against the future, built friendships, and drank beer.
I can’t wait until next year.