Thursday, October 13, 2016

Open source and OpenShift in government with Red Hat's David Egts

Red Hat's Chief Technologist for the North American public sector, David Egts, sat down with me to discuss some of the trends he's seeing in the public sector. In addition to being a podcaster himself (The Dave and Gunnar Show), David has years of experience working with government and related public sector organizations at all levels. In this show, he shares some of the trends he's been seeing around open source (such as the White House open source policy), the collaboration around OpenSCAP, how OpenShift is being used to manage containers, and the upcoming Red Hat Government Symposium in Washington DC.

Show notes:

MP3 audio (18:01)
OGG audio (18:01)


Gordon Haff: Today I'm joined by David Egts who's the Chief Technologist for the North American Public Sector at Red Hat. He's going to have some great insights to share with us about how government, at various levels, is adopting cloud and container technology.
Welcome, David.
David Egts:  Hey, Gordon. Glad to be here. A big fan of the show, so it's great to finally be on it after all the episodes I've listened to. Thanks for having me.
Gordon:  I should mention at this point, and we'll have a link in the show notes, that David is the co‑host with Gunnar Hellekson of his own podcast. Tell us a little bit about your podcast.
David:  It's "The Dave and Gunnar Show." People can go there to hear the podcast, where I interview a bunch of people in the open source community and people at Red Hat.
A lot of the time Gunnar and I will just get on and talk about the tech news of the day, and parenting, and all kinds of other fun things like that. I do have to admit, though, the podcast wouldn't exist if it wasn't for yours being the inspiration to get things going, so thank you for all the work you've done.
Gordon:  Thanks, David. We're going to talk about a number of cloud, and government, and policy things on this show, but let's start talking about something specific. Namely, that's container adoption in the government, specifically around Red Hat OpenShift.
David: In the public sector, OpenShift interest is taking off like crazy. I think the reason for it is that the folks in government that I've been talking to, when we talk about having a container strategy, know they want to have one, but they often don't have the time or the resources to roll their own container platform themselves.
They see all of this really hot innovation coming out of open source communities and all this hot software coming out of Silicon Valley from a lot of start‑ups. Then they see products like OpenShift Container Platform, which builds on things like Docker and Kubernetes, and they see that as an integrated solution. They really are flocking to embrace it.
There are a bunch of customer success stories that we can talk about that are really fun.
Gordon:  Let's get to those in a second. I did want to just make one point to your point about essentially making container adoption easy. This really is not just a government type of thing. We see this at a lot of customers who start out, "Whoa, if Google can do it themselves, we can do it ourselves, too." They go through an iteration and find this isn't really that easy to do.
David:  No, absolutely. Then also you end up building this snowflake that you can't put an ad in the paper and hire somebody to do this, or send them somewhere for training. You incur all this technical debt. Whereas, if you have an engineered solution that you can get training for or you could hire somebody for, it's really, really powerful.
A lot of people really focus on the mission of what they're working on.
Gordon:  Tell us some specific examples that you've been working on and that you can talk about there, out in the field.
David:  Yeah, one of my favorite ones. I actually did a podcast on The Dave and Gunnar Show where we interviewed the Carolina CloudApps folks, the team at the University of North Carolina. They're providing OpenShift as a service to all of the students, and faculty, and researchers at UNC.
It's really neat to see what they're doing, as far as the container densities that they're getting. They're running over a hundred apps per container host. If you think about that in the traditional virtualization space, getting like a 10:1 ratio of virtualized systems per hypervisor was great, but to get 100:1 is just amazing.
Then there are other things, too, as far as the range of people that they have to work with, from 18‑year‑old students who are brand‑new freshmen to faculty approaching their retirement years.
Being able to come up with documentation, and building a community, and getting people to adopt the software in a very easy way was a really neat challenge for them, which I thought was pretty amazing.
Then the last thing that I thought was really neat: for any sort of IT organization, you need to be very, very compelling or risk being replaced by shadow IT, and providing something like a container platform, as Carolina CloudApps does, is how you stay compelling.
That allows them to be really relevant and deliver a lot of value to the students, and faculty, and researchers, so they don't even consider going with something from a third party or spinning up something in their dorm rooms.
Gordon:  What are some of the lessons that you would say that you've learned, that Red Hat's learned, that the customers have learned as we've gone through this process of what's rather a new set of technologies?
David:  I think security is one of the big things that I've found out about. As people move into containers and stick everything into a container, the security burden shifts from being mostly the responsibility of the operations team to being a shared responsibility between the development and operations teams.
You can't just flip a container over the wall, hand it to ops, and then have it go into production. It can't be these black‑box containers you give over. You need to move some of that security discipline over to the development side, so in the CI/CD process, the same way that you do unit tests to make sure that your code behaves properly.
You also want to do security tests as part of your unit test workloads.
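To make this shift‑left idea concrete, here is a minimal sketch of what a security check living next to ordinary unit tests might look like. Everything in it, the Dockerfile content, the registry path, and the rule names, is a hypothetical illustration, not any particular project's policy.

```python
# Hypothetical security checks that run alongside unit tests in CI.
# A failing check breaks the build just like a failing unit test.

DOCKERFILE = """\
FROM registry.example.com/base:7.3
RUN yum -y update
USER appuser
EXPOSE 8080
"""

def runs_as_non_root(dockerfile: str) -> bool:
    """The image should drop root before the application starts."""
    users = [line.split()[1] for line in dockerfile.splitlines()
             if line.startswith("USER ")]
    return bool(users) and users[-1] != "root"

def base_image_pinned(dockerfile: str) -> bool:
    """Pinning the base-image tag keeps builds reproducible and scannable."""
    return all(not line.rstrip().endswith(":latest")
               for line in dockerfile.splitlines() if line.startswith("FROM "))

def test_security_gates():
    # These run in the same test suite as the functional unit tests.
    assert runs_as_non_root(DOCKERFILE)
    assert base_image_pinned(DOCKERFILE)

test_security_gates()
```

In a real pipeline these checks would sit beside the functional tests so that a policy violation fails the build as early as a logic bug would.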
Gordon:  As I've been writing about security over the last maybe six months or so ‑‑ and I've been doing a fair bit about it ‑‑ one of the things that's really struck me is the evolution in thinking about security.
I think we kind of came from a point where, on the one hand, you had people that were like, "Oh, clouds are insecure. We can't use clouds." Then, on the other hand, people would be like, "Oh. Well, we'll just use a public cloud provider, and we don't need to worry about security any longer."
You had these kind of extreme viewpoints, and I think it's actually good that ‑‑ from talking to people and reading things, and working through these deployments ‑‑ most people, I won't say everyone ‑‑ but most people seem to be thinking about security more intelligently and more thoughtfully.
David:  Yeah, and one of the things that I see, too, is that in the past, in the Federal government, you would have maybe annual audits, or these periodic audits, where, "We're gonna see if we've drifted from our security baseline."
The reality is that your adversaries, they're not going to attack you once a year. They're attacking you multiple times a day. Being able to automate your scanning, and being able to make sure that you haven't drifted from your security baseline, and being able to rapidly snap back into it is really, really powerful.
That's where tools like atomic scan, which we've integrated into OpenShift, are really compelling. We work with partners like Black Duck and Sonatype, and even with SCAP we can apply the DISA STIG to containers and make sure that they're locked down properly. It's really, really exciting work.
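The scan‑and‑snap‑back workflow described above can be sketched in miniature. The baseline settings below are made‑up placeholders, not an actual DISA STIG or SCAP profile; a real deployment would use tools like oscap or atomic scan rather than this toy comparison.

```python
# A minimal sketch of automated baseline-drift detection and remediation.
# Setting names and values here are hypothetical, not a real baseline.
BASELINE = {
    "ssh_permit_root_login": "no",
    "selinux_mode": "enforcing",
    "auditd_enabled": "yes",
}

def drift(current: dict) -> dict:
    """Return every setting that has drifted from the baseline."""
    return {k: {"expected": v, "actual": current.get(k)}
            for k, v in BASELINE.items() if current.get(k) != v}

def remediate(current: dict) -> dict:
    """Snap a drifted configuration back to the baseline."""
    fixed = dict(current)
    fixed.update(BASELINE)
    return fixed

# A host that has drifted: root login was re-enabled at some point.
current = {"ssh_permit_root_login": "yes",
           "selinux_mode": "enforcing",
           "auditd_enabled": "yes"}

report = drift(current)        # only the ssh setting is flagged
clean = remediate(current)
assert not drift(clean)        # after remediation, no drift remains
```

Because the scan is just code, it can run many times a day instead of once a year, which is the point David makes about adversaries attacking continuously.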
Gordon:  You've mentioned automation. Let's talk a little bit more about automation because, from what I've been seeing, automation is really the heart of how a lot of these organizations are evolving. They're really starting to think about, "What can I automate next? What's the next low‑hanging fruit that I can basically...don't have to worry about any longer?"
David:  Yeah, and that's where, what is it, people spend 80 percent of their budgets on keeping the lights on, and that leaves 20 percent for innovation. There's a lot of time lost when you have these Patch Tuesdays, and everybody's on this patching hamster wheel. It's like they spend all month patching and, before you know it, it's Patch Tuesday again.
You're just doing this over, and over, and over again, and there's absolutely no time for doing any sort of innovation at all. That's where, if you can, you automate things like security and automate your build processes. Whenever things can be automated, they should be automated.
There's an article that I wrote about a press interview with Terry Halvorsen, who's the CIO of the DoD. He said that the number one driver for data center consolidation in the DoD is labor costs, that automation is the key to driving down those labor costs, and that anything that can be automated should be automated.
That really underscores that point of you really need to be able to automate as much as possible if you want to do any sort of innovation.
Gordon:  That's really just the cost side of things. In areas like security, for example, you can really increase the quality, because not only is it taking you less work to do these manual, repeated tasks, but if it's automated you can be pretty sure that it's going to happen the same way the hundredth time as it did the first time. You're not going to make a mistake in there that creates a vulnerability for an attack.
David:  Yeah, and your checks can be a lot more robust and a lot richer, too. If I had a human who is locking down a system, there's only so many checks that that human can do per hour.
But, if I can make it machine readable, using tools like SCAP or tools like Ansible that can just go through, I can have a lot more rules and a lot more checks and have this defense in depth.
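A rough illustration of why machine‑readable checks scale the way a human reviewer can't: when each rule is just data plus a predicate, adding another layer of defense is one line, not another hour of someone's time. The rule names and host settings here are hypothetical.

```python
# Sketch: a declarative, machine-readable rule set applied to a host.
# Each rule is a (name, predicate) pair; depth grows by adding entries.
RULES = [
    ("password_min_length", lambda c: c.get("password_min_length", 0) >= 12),
    ("firewall_enabled",    lambda c: c.get("firewall") == "on"),
    ("telnet_absent",       lambda c: "telnet" not in c.get("packages", [])),
    ("updates_automatic",   lambda c: c.get("auto_updates", False)),
]

def audit(config: dict) -> list:
    """Return the names of every rule the configuration fails."""
    return [name for name, check in RULES if not check(config)]

host = {"password_min_length": 8,
        "firewall": "on",
        "packages": ["openssh", "telnet"],
        "auto_updates": True}

failures = audit(host)  # this host fails the password and telnet rules
```

The same rule set can be run against hundreds of hosts on every change, which is what tools like SCAP content and Ansible playbooks do at production scale.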
Gordon:  Let's switch gears a little bit here to talk about policy. One of the really big changes in the last few years has been the fact that government, at multiple levels, is really starting to think about open source systematically and, in some ways, perhaps embracing it more systematically than many private organizations.
David:  It'll be 10 years for me in February, when I joined Red Hat. I remember 10 years ago I would go into meetings and people were wondering if this whole open source thing was going to take off. Back in the day, open source was the insurgent; now it's the incumbent, and people in the government are huge consumers of open source.
We're proud to say that every tactical vehicle in the US Army is running at least one piece of open source software from Red Hat. You can go down the line with every agency. All 50 states are running Red Hat products or using open source technologies in a commercially supported way. I think that the pendulum is even swinging further from being a consumer to being a contributor and a collaborator.
We've done a lot of work as part of the open source community with the SCAP Security Guide, where we've partnered with NSA, and DISA, and NIST, and all kinds of other integrators, and government agencies, and folks from academia to do security baselines in an open source way. It has been very exciting to be able to come out with security baselines a lot faster than doing it yourself.
Also, the other thing that I'm seeing is that the White House just released the OMB open source policy guidance, where they talk about all of the custom‑written code that the government pays for. First off, it should be reusable by all of the agencies.
They also set a goal, over the next three years, to open source 20 percent of that code and then do an analysis to see if it's working out well. It was really neat to see the draft policy evolve into the final policy, covering all of that glueware that the government is paying government employees or integrators to implement.
They really want to reuse that as much as possible instead of reinventing the wheel over and over again. To me, that's really exciting.
Gordon:  Yeah, and, of course, a lot of the new policies even go beyond open source in terms of having open data, in terms of research that's paid for with taxpayer money, should be publicly available and so forth. Obviously, there's still a lot of work that needs to go into many of those areas, but it's certainly trending in a good direction.
David:  No, absolutely. I'm really excited by it.
Gordon:  If somebody wants to learn more about what Red Hat's doing in government, what the government itself is doing in open source, how they can get involved, what's one or two good next steps they can take.
David:  I think one of the things that they should do is check out the Red Hat Government Symposium. There's a short link to get to the registration site for that. That's our annual event in DC. This year it's on November 2nd at the Ritz‑Carlton in Pentagon City.
This is going to be really exciting where, if you think about it, the following week is the presidential election. We have the open source policy that came out. There's going to be a lot of people wondering what's going to happen over the next 12 months and how policies that are in place now will evolve over time.
It's going to be a great opportunity to network. Mike Hermus, who's the CTO of the Department of Homeland Security, is going to give a keynote, and we're going to have a lot of executives from Red Hat giving keynotes, like Tim Yeaton and Ashesh Badani. I'm really excited about it. Please, come check it out.
Gordon:  That's great, Dave. I just find it so interesting. The government often gets this reputation for being kind of a decade behind everyone else, but in a lot of respects, with an open source policy, an open data policy, and organizational openness in general, the government, in some ways, I think is ahead of a lot of the private sector.
David:  I wouldn't argue with that. A concrete example of that is the SCAP work that we've been doing as part of the SCAP Security Guide. SCAP was something that was started by NIST, the National Institute of Standards and Technology. A lot of commercial organizations, like Microsoft, and Red Hat, and others, got together to come up with SCAP policy that's machine readable.
I remember going back to our engineering organization and saying, "You know, we've got to get this inside of our products," and hearing, "Oh, no. The addressable market for that is just government nerds."
Now it's to the point where people are developing PCI compliance policy as part of the SCAP Security Guide. We have contributions from all over the world. From what I understand, Lufthansa will run a SCAP scan on the in‑flight entertainment system every time they turn their planes on. It's really exciting to see that type of change.
At the Red Hat Summit, over the past couple of years, we would do SCAP sessions where Shawn Wells would give the presentation. He would poll the audience: "OK, how many people are from commercial and how many people are from the public sector?"

A couple of years ago it was like 80 percent public sector, and this year the poll was 85 percent commercial. It's really interesting to see how a lot of this innovation that happened in government has actually benefited private industry, which, to me, is a really good use of taxpayer dollars.

Tuesday, September 06, 2016

Red Hat's Jen Krieger on DevOps feedback loops

The focus on metrics and feedback loops in software development tends to be around technical measurements like uptime. Or, if we're being really clever, the business outcomes associated with those numbers. In this episode, Red Hat Chief Agile Architect Jen Krieger argues that the human side of feedback loops may be the most important.

Show notes:

MP3 audio (14:55)
OGG audio (14:55)


Gordon:  Hi, everyone. I'm here again with Jen Krieger, the Chief Agile Architect with Red Hat. In the last podcast, we were talking about distributed teams and, in general, how to make distributed teams work effectively, presumably leading to better outcomes in terms of delivering software, because that's the name of the game, after all.
I'd like to follow up with Jen on something else we were discussing that came out of the Agile 2016 Conference: the idea of feedback loops. I've been talking, along with my colleague William Henry, at some events about this idea of metrics. How do you know if your DevOps is working?
The focus tends to be measuring throughput per second, or latency, or the number of successful deploys, or downtime. Those are important, as are the business outcomes that stem from them.
For all the talk we have about culture in DevOps, though, we don't talk an awful lot about how you monitor how the team is doing, both on a day‑to‑day basis and over the longer term. What are some of your thoughts on that, Jen?
Jen:  Yes, it is absolutely true that when we think about the word DevOps, and we think about feedback loops, we immediately go to the space where some computer somewhere is giving us some sort of data that will help us make a decision about some sort of thing that we're stumbling over. But we rarely, if ever, think about the fact that every single thing that we do with other people is a feedback loop, even having visual confirmation of your idea.
Say you're in a meeting and you're sitting with a bunch of people. You are saying, "What do you think about my idea?" Somebody sitting across the table leans in and maybe crosses their arms and raises an eyebrow.
You might take this as a visual form of feedback and not really know what to do with it. There are all these other methods of feedback that we're getting on a daily basis that we have zero idea of what to do with.
I recently wrote an article in which I was talking about feedback loops. I was talking about the idea that, for everybody in the DevOps space ‑‑ and this might be a very salacious thing ‑‑ you just need to stop thinking about the feedback loops that you're worried about at work.
All the computer feedback loops, and all those things that come out of a machine, you just need to stop worrying about them right now. You need to start focusing on the human feedback that you're getting and the human feedback loops that you're trying to cultivate, because, I guarantee you, those feedback loops are about 90 times harder to resolve than whatever's going on with your computers.
If your production server is down, don't ignore that. I assure you, you can resolve [laughs] that problem.
If you are having a fight or a negative interaction with somebody that you're working with, I guarantee you that, if you cannot figure out how to deal with that feedback loop, you will continue to not be able to deal with that feedback loop, and it probably will also impact other relationships that you have around that particular interaction.
One of the things that I've been thinking about lately that might help folks process feedback is to recognize that we live in a digital age where there is so much feedback coming in that sometimes it's hard to identify what feedback we actually need to listen to.
We were talking last podcast about distributed teams, and the nature of IRC, and the amount of information that comes in across just one form of communication. If you are a person who likes to participate in open source communities, email will be a problem. Trying to keep up with email from a popular open source project like Kubernetes is impossible.
It's just impossible. You can't do it. So at some point you have to start identifying what feedback is the right feedback to focus on. That is going to also be critical to your addressing the entire loop.
Gordon:  What you say there about having all these numbers, metrics, what have you ‑‑ it's true from a technical standpoint, too. I was at DevOpsDays, I think it was London, earlier in the year, and we were having an open space about metrics. What do you really want to measure?
One of the participants made the comment that his company used to get this monthly report that tracked, I think it was, 2,000‑something metrics associated with their IT systems, and nobody ever looked at the thing.
Even if we're just talking about the technical aspect of things, much less the entire system including the people, it's really important to think about what matters. Maybe you still log all that other stuff, and use Splunk, and maybe do predictive analytics, but it's not something you should be focusing your attention on, on a day‑to‑day basis.
Jen:  The win here is not the number of data points that you're logging. The win is, "If you're logging the data, is your company actually using the data that you are logging?" Because, frankly, logging data costs you money, and it costs you a lot of money. It's not free. If you are using Splunk, you pay them based on the amount of data that you're storing.
Yeah, [laughs] just to produce a report that no one is doing anything with is not the ideal feedback loop.
Gordon:  It may have been the same event. Somebody from ‑‑ I think it was Google ‑‑ made the comment that data you don't use has a negative ROI.
Jen:  A couple of years ago we were talking about that. It was something like black data, or dark data, or something like that. [laughs] It was a ridiculous buzzword. In any case, the most critical part of feedback loops is not just receiving the data, it's understanding what to do with it, which is what I was alluding to before.
You can be getting a bunch of information coming in, so data from somewhere is the first step. You're actually getting some sort of feedback from something, whether it be human or machine. Identifying which data you want to respond to is important. Figuring out what it means when something happens is important.
Then, somehow, there's emotionally tying value to it, and that's an individual thing. The example I used in my blog article was the fact that my husband...he's a video gamer, and he plays a game called "Dark Souls." He continues to run around in this game and die over and over and over again.
We were having a simple conversation in which he told me that he had a lot of souls, which is the monetary system in the game. I said to him, "If you die, you're going to lose all your money."
He's like, "Oh, no, no, no. That won't happen." I was like, "Sure," but I knew for sure that he was going to die again. Sure enough, a day later I heard him cursing in the basement about this. I said, "Sure enough." He died and he lost all his souls.
I looked at that situation and I thought, "Well, gosh. I'm thinking about feedback loops, and I should assign that whole thought process that I've been having on feedback loops and figure out how I might have compelled him to change his behavior in order to not be in the situation right now where he's angry in the basement."
I thought, "Well, instead of me just saying, 'you're gonna die,' I could have said something to the tune of, 'You've got a lot of money, and there's all these things that you can buy. You could upgrade your character. You could do all these things with it. What are the things that you want, that you haven't done yet? You don't have to spend it all.'"
I know he's frugal, which is probably why he's not spending it. I thought, "You can spend half of it, and just go ahead. It's OK." He could have actually spent half of it and gotten a brand new sword or something and been happier for it, but he didn't.
It's the lack of assigning some sort of emotional value to it, or emotional feeling to it that prevents people from understanding feedback, or really participating in a loop.
If you go all the way back to that early conversation about standup, where it becomes just a status meeting for everybody, it's because no one is assigning, or at least buying into, the idea that it's actually going to improve the situation for the team. No one really understands how it does that. No one really sees the value of participating in the activity, and, therefore, they don't.
That is the fundamental part of the feedback loop where things go south. You can monitor the heck out of your computer systems, but, if you fundamentally don't care whether they're up or down, it doesn't matter. It just doesn't. Sure, monitoring the right data on your systems, throughput, all that good stuff is really critical.
"What is the most valuable data you should be capturing?" means absolutely nothing if you don't care about the data. Caring about the data is a human quality. You can't make somebody care about it, they just have to want to.
Gordon:  Talking about teams and for the help of teams, what are some of the things as Agile Architect, Agile Coach that you really think about, that you really keep your eye on?
Jen:  The engagement of teams. I tend to pop in and out of a lot of team meetings. I keep my eye on a lot of different people. We, as an industry, have an engagement problem. We have a bunch of people who are incredibly highly paid thought workers who are probably taken advantage of a little bit.
I don't mean that in a way that we should have higher egos, but we are also ‑‑ especially in the United States ‑‑ in a situation where we'll just work ourselves to death, and we're just going to keep working. Not a lot of people pay a whole lot of attention to the boundaries that one would hope to expect from work.
Some of the simple things I look for, to see whether a team is going to fall down, are whether or not they're aggressively answering email over the weekend, and whether, when I get into a meeting with them, they immediately dive into a heated argument or a stilted conversation.
Or do they spend maybe the first couple of minutes just chatting about whatever? It could be chatting about something that is completely unrelated to what we came to talk about, or they could be excitedly talking about some technology. That's fine, too, but they're having an interaction where most of the people on the phone are actually participating. I'm basically looking for engagement. How engaged are people?
For video conferencing, are people smiling? Do they look like they're going to fall out of their chairs because they're so tired? I'm looking for that kind of stuff. Also, a lot of my job is just making friendships with people so that I can notice when things aren't going well.
It's not always a science, and that's part of the problem, because monitoring computers is really easy. You can set parameters and use them universally. Regardless of what company you work at or what server you're using, it's pretty much universal, but humans are so different.
If I've got one engineer who I know is never going to smile because it's just his personality, I can tell when he's angry about things because I know him and I know what else to look at. But if I've got an engineer who I know smiles all the time, and suddenly she's not smiling anymore, then I know there's something going on, and I can try to dig in a little bit and see if I can help her with something. It's really just about making a human connection.
Gordon:  To close out this podcast, what advice would you give to our listeners to use feedback effectively in the context of people and teams?
Jen:  One of the most important things is: do not participate in feedback loops just because somebody told you to. That is, hands down, the worst thing that you could possibly do. If you are being asked to do, say, your annual performance review, chances are I'm not going to tell you to stop participating in that.
That's probably something you need to do. But the other thing that you should consider is that, if you are already in a space where you're not going to listen to what your boss, or the person doing your performance review, is going to say to you, chances are you're not going to get any value from it at all.
There has to be some sort of connection you make to the information that you're getting. If you're not really participating in a, say, daily stand up because your team is already talking nonstop in other communication streams during the day, maybe you want to talk about, "Do we actually need to sync? Do we need to sync about work?"
Maybe it's not work that we have to sync about during that 15 minutes. Maybe it's just to get on the phone and say, "Good morning," to everybody, or see people's faces for a few minutes a day. Maybe it's not an enforced thing that you do just because you're told to.

Honestly, just don't do it if it's not providing value for you, but, on the other hand, make sure you understand the value that it's supposed to provide before you make any sudden decisions about it.