Friday, December 14, 2012

Enterprise application development lives!

When the notice went live a while ago for a panel on enterprise PaaS that I would be appearing on, it attracted the attention of an anonymous commenter. The tone of the comment was about what you'd expect from an anon--which is to say, of a tenor that could result in rather unpleasant repercussions if delivered in person. But, that aside, the substance of the remark is worth considering. To wit, is it true that, as this individual wrote, "PaaS for IT is complete 100% BS. All new applications are SaaS. Who is funding IT to build new applications?"

Platform-as-a-Service overview

It's not a completely risible opinion. After all, we don't need to look far to see great examples of Software-as-a-Service replacing packaged on-premise applications which, in turn, had often replaced largely bespoke software sometime in the past. Certainly, it's unlikely any business would write a payroll or benefits application for its own use and few enough would sensibly tackle custom customer relationship management given the existence of Salesforce.com, SugarCRM and others. Indeed, the idea that standardized functions can be largely commoditized is central to many cloud computing concepts more broadly.

But to extrapolate from such examples to the death of application development is to take an unfounded leap.

For one thing, it misunderstands a platform such as Salesforce.com. Yes, Salesforce is a SaaS used by countless enterprise sales forces and marketing teams to track customer contacts, forecasts, and sales campaigns. But that's the view from the perspective of the end user. From the perspective of independent software vendors and enterprise developers, Salesforce is a platform that can be extended in many ways. Just to give you an idea of scale, Dreamforce--Salesforce's annual developer conference--had over 90,000 attendees in 2012. That's a huge conference. Industry analyst Judith Hurwitz calls Force.com, the platform aspect of Salesforce, a "PaaS anchored to a SaaS environment."

Thus, even using a SaaS doesn't eliminate application development. In fact, it may enable and accelerate more of it by reducing or eliminating a lot of the undifferentiated heavy lifting and allowing companies to focus on customizations that are specific to their industry, products, or sales strategy.

Another analyst, Eric Knipp of Gartner, states the case for ongoing application development even more strongly. He writes that "While I don't debate that 'the business' will have more 'packages' to choose from (loosely referring to packages as both traditional deployed solutions and cloud-sourced SaaS), I also believe that enterprises will be developing more applications themselves than ever before." In fact, he goes so far as to call today "a golden age of enterprise application development."

The reason is that PaaS makes development faster, easier, and--ultimately--cheaper. And businesses don't have a fixed appetite for applications--which is to say, business services that they can either sell or leverage to otherwise increase revenues or reduce costs. We're hearing a lot of talk around business analytics and "big data" today. Likewise for mobile. But, really, information and applications are increasingly central to more and more businesses, even ones that we didn't historically think of as especially high-tech or IT-heavy.

The companies that grew up on the Web have always had information technology at their core. Nearly as well known are examples from companies that design and manufacture high technology components. Or financial services firms that depend on the latest and greatest hardware and software to rapidly price and execute trades. These types of businesses are on the cutting edge of the "3rd Platform," as IDC calls it--but that's what we've come to expect in these industries.
What's most different today is that the cutting edge IT story doesn't begin and end with such companies. Rather, it's nearly pervasive.

Media today is digital media. The vast server farms at animation studios such as Dreamworks are perhaps the most obvious example. And their computing needs have only grown as animation has shifted to 3D. But essentially all content is digitized in various forms. For example, sports clips are catalogued and indexed so that they can be retrieved at a moment's notice—whether for a highlights reel or a premium mobile offering, a huge monetization opportunity in any case.

How about laundry? Now, that's low tech. Yet Mac-Gray Corporation redefined laundry room management when it introduced LaundryView, which allows students and residents to monitor activity in their specific laundry rooms so they can see whether a machine is free or their laundry is done. The site has been visited by 5 million people, and the company has added online payment and service dispatch systems.

Agriculture is an industry that suggests pastoral images of tractors and rows of crops. Yet, seed producer Monsanto holds more than 15,000 patents for genetically-altered seeds and other inventions. (An area of intellectual property which may be controversial in some circles and geographies but which is no less striking for that.)

I could continue to offer examples both familiar and less so. However, the basic point is straightforward. Increasingly, information technology isn't something that is primarily important to a few industries and uses. Rather, it's permeating just about everywhere, whether that means creating new types of services, better connecting to customers, increasing efficiency, delivering better market intelligence, or creating better consumer experiences.

And that means businesses will need to leverage platforms, of which Red Hat's OpenShift is a great example, that streamline their development processes and make it possible to more quickly and economically create the applications they need to be competitive. Will they leverage pure SaaS too? (And, for that matter, public cloud services such as those provided by the likes of Amazon and Rackspace?) Sure. The focus should be on differentiating where differentiating adds value, not spending time and resources on me-too plumbing.

But that's actually what PaaS is best at: making it easier for developers to focus on applications, not infrastructure. Enterprise application development is a long way from dead. But maybe the old way of doing it is.

Wednesday, December 12, 2012

Links for 12-12-2012

Dawn at Mesquite Dunes, Death Valley

Got out to Death Valley prior to Amazon re:Invent.

Tuesday, December 11, 2012

Links for 12-11-2012

Tuesday, December 04, 2012

Links for 12-04-2012 (Long overdue, thanks to travel)

Monday, December 03, 2012

My appearance on CloudExpo West CTO Power Panel



That snap from the video is sort of scary, I admit. Nonetheless, you might like the overall CTO Power Panel that I appeared on during CloudExpo West a few weeks ago.

Tuesday, November 20, 2012

Speed Graphic photography at the Head of the Charles Regatta

I thought this was a cool contrast with all the long lens DSLRs (including mine) out there.

Red Hat OpenStack "Folsom" version now available

The story of cloud computing has been one of open source-fueled innovation—often directly driven by end users with a need. OpenStack is no exception. The broad community developing and supporting OpenStack includes end-user organizations that have demanding IT requirements. Red Hat is proud to play its own role in delivering innovation to the many open source communities with which we're involved, including OpenStack. We also have a vision to make that innovation consumable by our customers. We now have a Technology Preview of OpenStack ("Folsom" version) available.

Read the rest of my blog post on the Red Hat press site.

Links for 11-20-2012

Monday, November 19, 2012

Links for 11-19-2012

Thursday, November 15, 2012

Links for 11-15-2012

Tuesday, November 13, 2012

Links for 11-13-2012

Thursday, November 08, 2012

Links for 11-08-2012

Friday, November 02, 2012

Links for 11-02-2012

The five stages of BYOD

The "Five Stages of Grief" (aka the K├╝bler-Ross model) as applied to Bring-your-own-Device.
Denial. It's just a passing fad. Or maybe we'll just get rid of those damned entitled Millennials who think they can bring their iPads into work. They'll learn soon enough how things work in the real world. Well, at least we don't actually have to let them on the corporate network. Right? Right???
Anger. Don't people know IT tells them what devices and software they can and can't use? I'm sending out an email reminding everyone just who is in charge of this, and they'd better shape up or else! WTF? Is that our CEO tapping away on a tablet over there?
Bargaining. OK, everyone. I get that you say these things make you more productive and all that. Tell you what. Let me load up a bunch of special monitoring and control software on those devices you bought yourself and we can all be friends again--just as soon as you read and sign this 50-page contract documenting the rules you'll need to follow.
Depression. I've lost control. I can't do my job. There's going to be a security breach and I'm going to be blamed. Nobody understands that IT has responsibilities for our company data and our customer data.
Acceptance. Maybe this isn't so bad. Most of our employees are actually pretty reasonable about taking measured steps like using VPNs and setting a password once I explained why it's so important. In fact, they're even OK with installing profiles that enforce some of those rules. And they get that I can't offer official support for stuff they buy on their own. I wonder if I can start getting out of supporting PCs too?

Thursday, November 01, 2012

The other hybrid: Community clouds

Community clouds were included in the original NIST definition of cloud computing, which has come to be seen as more or less the definitive taxonomy. NIST defined community clouds as cloud infrastructure "provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations)." However, as recently as a couple of years ago, it remained something of a theoretical construct--an intriguing possibility with only limited evidence to suggest it would actually happen anytime soon.

That's changed.

It's not that community clouds are everywhere, but we now see concrete commercial examples in pretty much the places where you'd expect: where there are specific rules and regulations that have to be adhered to, and where there are entities that can step up to some sort of supervisory or overseeing role.

Unsurprisingly, the federal government is one of the most fertile grounds for the community cloud idea. Government... well, "thrives" may not be quite the right word. But certainly government procurement is rife with a veritable alphabet soup of rules, standards, and regulations that must be adhered to. Indeed, government procurement was one of the driving forces behind the aforementioned NIST definition in the first place. And, in many cases, the policies and processes associated with these rules have relatively little overlap with how businesses operate outside of the government sphere.


Furthermore, government agencies aren't wholly independent entities. They've often acted as if they were, to be sure. And one of the big issues with government IT costs historically is that purchases often get made project-by-project, agency-by-agency. That said, initiatives like the 2010 Cloud First Mandate have pushed the federal government towards more centralized and shared IT functions. The Cloud First Mandate may not have progressed as quickly as then-US CIO Vivek Kundra initially intended. Nonetheless, it's helped push things along in that direction. (As, no doubt, budget pressures have overall.)

The result is that many agencies are rapidly moving towards a cloud computing model--often using a hybrid approach that bridges internal resources with external GSA providers. I discuss one such agency in a session at the Cloud Computing Bootcamp in Santa Clara next week.

One public cloud specifically catering to the federal government is Amazon's GovCloud, which:

is an AWS Region designed to allow US government agencies and customers to move more sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. The AWS GovCloud (US) framework adheres to U.S. International Traffic in Arms Regulations (ITAR) requirements. Workloads that are appropriate for the AWS GovCloud (US) region include all categories of Controlled Unclassified Information (CUI), including ITAR, as well as Government oriented publically available data. Because AWS GovCloud is physically and logically accessible by US persons only, and also supports FIPS 140-2 compliant end points, customers can manage more heavily regulated data in AWS while remaining compliant with federal requirements. In other respects the GovCloud Region offers the same high level of security as other AWS Regions and supports existing AWS security controls and certifications such as FISMA, SSAE 16/SOC1 (formerly SAS-70 Type 2), ISO/IEC 27001, and PCI DSS Level 1. AWS also provides an environment that enables customers to comply with HIPAA regulations. (See the AWS Security page for more details.) The customer community utilizing AWS GovCloud (US) includes U.S. Federal, State, and Local Government organizations as well as U.S. Corporate and Educational entities.

As discussed by Brandon Butler in Network World, however, community clouds aren't limited to government. 

Given the data privacy standards imposed by the HIPAA regulation, healthcare providers also have some specific requirements and concerns when it comes to cloud computing--or, really, IT in general. Nor are these concerns purely academic. In 2011, the Department of Health and Human Services fined two different organizations a total of $5.3 million for data breaches even though those breaches were, arguably, relatively minor.

Optum, the technology division of the UnitedHealth Group, is an example of a healthcare community cloud from the Network World article. Butler writes that:

[Optum] released its Optum Health Cloud in February as a way for those in the healthcare industry to take advantage of cloud resources. Strict data protection standards regulated by HIPAA, plus a constant pressure to reduce costs and find efficiencies in healthcare management has made community cloud services seem like a natural fit for the industry, says Ted Hoy, senior vice president and general manager of Optum Cloud Solutions. Powered by two data centers owned by Optum, Hoy hopes the community cloud will eventually be able to offer IaaS, SaaS, and PaaS for customers.

The service, Hoy says, has differentiating features tailored specifically for the healthcare industry. HIPAA regulations, for example, regulate how secure certain information must be depending on what it is. An e-mail exchange between two doctors about the latest in medical trends needs a different level of protection compared to a communication between a doctor and a patient. Optum worked with Cisco to create security provisions tailor-made for the system that identifies who is entering information, what type of information it is and who has access to it.

It's still early days for community clouds, and it's reasonable to question the degree to which they'll expand beyond the fairly specific (and relatively obvious) uses we're mostly seeing to date. At another level, though, I see this as another example of how it's hard to call exactly where workloads are going to end up running--which is why industry analysts such as Gartner are making a big deal about concepts like Hybrid IT.

Links for 11-01-2012

Wednesday, October 31, 2012

Links for 10-31-2012

How application virtualization was reborn

Server virtualization has become a familiar fixture of the IT landscape and an important foundation for cloud computing.

But virtualization is also relevant to client devices, such as PCs. To a greater degree than on servers, client virtualization takes many forms, reflecting abstraction and management that can happen in many different places. Client virtualization includes well-established ways of separating the interaction with an application from the application itself, the leveraging of server virtualization to deliver complete desktops over the network (Virtual Desktop Infrastructure, or VDI), and the use of hypervisors on the clients themselves. In short, client virtualization covers a lot of ground, but it's all about delivering applications to users and managing those applications on client devices.


It’s essentially a tool to deal with installing, updating, and securing software on distributed “stateful” clients—which is to say, devices that store a unique pattern of bits locally. If a stateless device like a terminal breaks, you can just unplug it and swap in a new one. Not so with a PC. At a minimum, you need to restore the local pattern of bits from a backup.

However, client virtualization (in any of its forms) has never truly gone mainstream, whether it was because it often cost more than advertised or just didn’t work all that well. It’s mostly played in relative niches where some particular benefit—such as centralized security—is an overriding concern. These can be important markets. We see increased interest in VDI at government agencies, for instance. But we're not talking about the typical corporate desktop or consumer. 

Furthermore, today, we access more and more applications through browsers rather than applications installed on PCs. This effectively makes PCs more like stateless thin clients. And, therefore, it makes client virtualization something of a solution for yesterday’s problems rather than today’s.

Except for one thing.

Client virtualization, in its application virtualization guise, has in fact become prevalent. Just go to an Android or iOS app store.

Application virtualization has been around for a long time. Arguably, its roots go back to WinFrame, a multi-user version of Microsoft Windows NT that Citrix introduced in 1995. It was, in large part, a response to the rise of the PC, which replaced “dumb terminals” acting as displays and keyboards for applications running in a data center with more intelligent and independent devices. Historically, application virtualization (before it was called that) focused on what can be thought of as presentation-layer virtualization—separating the display of an application from where it ran. It was mostly used to provide standardized and centralized access to corporate applications.

As laptops became more common, application virtualization changed as well. It became a way to stream applications down to the client and enable them to run even when the client was no longer connected to the network. Application virtualization thus became something of a packaging and distribution technology. One such company working on this evolution of application virtualization was Softricity, subsequently purchased by Microsoft in 2006.

I was reminded of Softricity earlier this year when I spoke with David Greschler, one of its co-founders, at a cloud computing event. He’d moved on from Microsoft to PaperShare but we got to talking about how the market for application virtualization, as initially conceived, had (mostly not) developed. And that’s when he observed the functional relationship between an app store and application virtualization. And how application virtualization had, in a sense, gone mainstream as part of mobile device ecosystems.

If you think about it, the app store model is not the necessary and inevitable way to deliver applications to smartphones, tablets, and other client devices.

In fact, it runs rather counter to the prevailing pattern on PCs—regardless of operating system—towards installing fewer unique applications and running more Web applications through the browser. Google even debuted Chrome OS, designed to work exclusively with Web applications, to great fanfare. As network connectivity improves in more places and as standards such as HTML5 evolve to better handle unconnected situations, it's a reasonable expectation that this trend will continue.

But the reality of Chrome OS has been that, after early-on geek excitement, it’s so far pretty much hit the ground with a resounding thud. At least as of 2012, it’s one thing to say that we install fewer apps on our PCs. It’s another thing to use a PC that can’t install any apps. Full stop.

What’s more, it’s worth thinking about why we might prefer to run applications through a browser rather than natively.
It’s not so much that it lets developers write one application and run it on pretty much anything that comes with a browser. As users, we don’t care about making life easier for developers except insofar as it means we have more applications to use and play with. And, especially given that client devices have coalesced around a modest number of ecosystems, developers have mostly accepted that they just have to deal with that (relatively limited) diversity.

Nor is it really that we’d like to be able to use smaller, lighter, and thinner clients. Oh, we do want those things—at least up to a point. But they’re usually not the limiting factor in being able to run applications locally and natively. We don't want to make clients too limited anyway; computer cycles and storage tend to be cheaper on the client than on the server.

No, the main thing that we have against native applications on a client is their "care and feeding." The need to install updates from all sorts of different sources and to deal with the problems when upgrades don't go as planned. The observation that a PC's software sometimes needs to be refreshed from the ground up to deal with accumulating "bit rot" as added applications and services slow things down over time.

And that’s where centralized stores for packaged applications come in. Such stores don’t eliminate software bugs, of course. Nor do they eliminate applications that get broken through a new upgrade—one need only peruse the reviews in the Apple App Store to find numerous examples. However, relative to PCs, keeping smartphones and tablets up-to-date and backed up is a much easier, more intuitive, and less error-prone process.

Of course, for a vendor like Apple that wants to control the end-to-end user experience, an app store has the additional advantage of maintaining full control of the customer relationship. But the dichotomy between an open Web and a centralized app store isn’t just an Apple story. App stores have widely become the default model for delivering software to new types of client devices and certainly the primary path for selling that software.

The Web apps versus native apps (and, by implication, app stores) debate will be an ongoing one. And it doesn’t lend itself to answers that are simple either in terms of technology or in terms of device and developer ecosystems.
Witness the September 2012 dustup over comments made by Facebook CEO Mark Zuckerberg that appeared to diss his company’s HTML5 Web app, calling it "one of the biggest mistakes if not the biggest strategic mistake that we made."

However, as CNET’s Stephen Shankland wrote at the time: “Those are powerfully damning words, and many developers will likely take them to heart given Facebook's cred in the programming world. But there are subtleties here -- not an easy thing for those who see the world in black and white to grasp, to be sure, but real nonetheless. Zuckerberg himself offered a huge pro-HTML5 caveat in the middle of his statement.”

It’s often observed that new concepts in technology are rarely truly new. Instead, they’re updates or reimaginings of past ideas both successful and not. This observation can certainly be overstated, but there's a lot of truth to it. And here we see it again--with application virtualization and the app store.

Tuesday, October 30, 2012

Bass Harbor Light

Bass Harbor Light, a photo by ghaff on Flickr.

Another nice summer in Acadia National Park (on a couple different trips).

Links for 10-30-2012

Saturday, October 27, 2012

Head of the Charles 2012

Head of the Charles 2012, a photo by ghaff on Flickr.

The Head of the Charles last weekend was my first really heavy-duty use of the Sigma 150-500mm lens that replaced my old Sigma tele after its AF broke. When I sent that lens in for repair, I was offered a trade-in at a pretty good rate that it seemed silly not to take advantage of, even though it's a category of lens I don't use a huge amount.

I find the focus on the new lens a lot more responsive than the old one's (as well as having a top end of 500mm rather than 400mm). However, the new lens also exposes the AF limitations of my EOS 5D a lot more. So it's got me leaning towards an upgrade to the 5D Mark III rather than the not-yet-available 6D that I had previously been considering.

Thursday, October 25, 2012

Links for 10-25-2012

Wednesday, October 17, 2012

Links for 10-17-2012

Monday, October 15, 2012

Links for 10-15-2012

Thursday, October 11, 2012

The inevitability of cloud computing

Thanks to a pointer from Joe McKendrick over at Forbes, this morning I had a chance to read a study looking at 2012 cloud adoption patterns (mostly at larger organizations) put together by Navint Partners. The bottom line? "While there’s still much debate over the Cloud’s security, the industry consensus is one of inevitability."

The study looked at both private and public cloud deployments although it's a bit hard to tease apart conclusions as to when they relate to on-premise versus hosted offerings--or a hybrid combination of the two. I've come to somewhat wistfully think back to a 2009 CNET Blog Network piece I wrote about cloud terminology and sorta wish that we, as an industry, had come up with a better way to unpack the different concepts and approaches that come together under the "cloud computing" umbrella. But I digress.

Among the study's findings was that 80 percent of respondents recognized cloud technology as giving their organizations a competitive advantage.

The report goes on to note that:

Cloud’s scalable nature and modern approach to data and infrastructure pushes organizations into a more competitive position. While most CIOs recognize the Cloud has existed in some form for a decade, SaaS solutions are, in many industries, still novel. [Navint's Robert] Summers explained that while larger corporations have been using private clouds for a while, small‐to‐mid sized businesses can dramatically scale their operations and outpace competitors if some processes are relegated to a SaaS or Cloud model.

This is consistent with what we've been seeing at Red Hat with early cloud deployments. The ultimate goal from a CxO's perspective is to use cloud computing in order to make technology a competitive differentiator rather than a keep-the-lights-on cost. This goal only becomes more important as technology is increasingly core to how more and more businesses operate.

What form cloud takes will depend on the company. For smaller organizations, SaaS will likely play an outsized role.

But, as noted by Gartner's Eric Knipp in a recent blog post: "While I don't debate that 'the business' will have more 'packages' to choose from (loosely referring to packages as both traditional deployed solutions and cloud-sourced SaaS), I also believe that enterprises will be developing more applications themselves than ever before." He goes on to describe why he believes that a golden age of enterprise application development is upon us, partly because of the rise of Platform-as-a-Service. I'll discuss Knipp's thesis in more detail in a future post.

On the downside, the study also found that:

survey respondents still ranked security as the top concern (above compliance and integrity), and affirmed data security and privacy as the number one barrier to both public and private cloud adoption. Despite highly advanced security and fraud countermeasures employed by Cloud vendors, CIOs and other executives regard security guarantees and redundancy policies with guarded pessimism. Practically, this fear has had the effect that many companies have yet to move “mission‐critical” applications to the cloud.

I guess I'm not really surprised by this finding either. One wonders to what degree this is about perception rather than reality. But, at some level, the distinction isn't that important if it's what potential customers believe.

The good news from my perspective is that I see a lot of good work happening out in the industry to bring structure to security (and compliance/governance/regulatory) discussions and to bring together the tools for discussions that transcend naive safe/not-safe dichotomies. I've got an upcoming piece that looks into the work the Cloud Security Alliance (CSA) is doing in this space.

Finally, it's clear that cloud computing isn't going to be about private or public.

36% of survey participants believe that budget dollars for public cloud computing will increase by as much as twenty percent by 2014, and 46% expect budgets for private cloud computing to jump by more than twenty percent over the same period.

Which is why we're focused on open and hybrid at Red Hat.

Links for 10-11-2012

Friday, September 28, 2012

Links for 09-28-2012

Wednesday, September 26, 2012

Links for 09-26-2012

Wednesday, September 19, 2012

Links for 09-19-2012

Tuesday, September 18, 2012

Podcast: Cloud Evangelist Chat: Talking Intel Developer Forum

The Intel Developer Forum, held the week of September 10 in San Francisco, is always a good opportunity to reflect on what's happening with hardware. That's what my fellow Red Hat Cloud Evangelist Richard Morrell and I do in this podcast. Did the cloud make it possible to keep delivering chip performance improvements? Did the cloud fundamentally cause the ongoing disruption in the client space from iPads to Android to ARM?

You can check out Richard's blog at cloudevangelist.org.

Listen to MP3 (0:24:46)
Listen to OGG (0:24:46)

Thursday, September 13, 2012

Links for 09-13-2012

You can't just say no

This post by James Staten is a bit of an ad for some detailed Forrester reports, but it nonetheless offers solid, succinct advice about how most organizations should approach a cloud use policy.

It's too late for your policy to say, "The use of cloud services is not allowed," so you need to start from an assumption that it is already happening — and that more of it is happening behind your back than in front of your nose. In fact, any policy that takes a draconianly negative tone probably won't go over very well (it might just be blatantly ignored).

A better approach is to actually encourage its use — in the right way. Your cloud policy needs to present IT as an assistant to the business in the use of cloud and as an advocate for cloud. This will ensure that IT isn't seen as the internal police that you need to hide your business-driven cloud use from. Because your policy should help bring cloud use into the light where it can be monitored, managed, and made better.

As Red Hat's CIO Lee Congdon put it in a webinar I did with him back in March: "Moving to cloud? Your business may have already beaten you to it"--with "websites, social media presence, customer service, and CRM."

The situation is similar to (and, in many respects, related to) Bring-Your-Own-Device. When I write about BYOD, I invariably get comments to the effect that it's a passing fad waiting for a disaster to strike and for IT to subsequently clamp down. This reader's response is fairly typical of such an attitude: 

Eventually when many of the younger crowd starts to understand why they can't find work, they will realize that employers call the shots. The BOYD trend was started by a small group of people who thought their devices manufacturer (I'll give you 3 guesses who the manufacture was, and the first two don't count) is so superior to other devices that they refused to work on anything else. I would happily wish those people well finding employment elsewhere and call for the next interviewee.

As Staten correctly notes, in most environments trying to roll back the clock will merely drive usage underground and beyond the ken of IT governance and policy--to say nothing of cutting off IT and line of business users from the genuine benefits of public cloud services.

The reality of cloud usage (in its various forms) is one reason why many users with whom I speak are intensely interested in topics such as hybrid clouds and application portability. They realize that cloud is happening and they don't want to stop it. But they do want to bring it under an integrated management and policy framework that empowers users while protecting the company.

And this is why at Red Hat, everything we're doing in cloud, from Red Hat CloudForms to OpenShift to OpenStack to our Certified Cloud Provider Program, is built around the twin concepts of open and hybrid.

Wednesday, September 12, 2012

Links for 09-12-2012

Tuesday, September 11, 2012

I talk cloud on Silicon Angle's theCube

This video was shot back in June at Forecast 2012.

Hotel del Coronado LinuxCon event

Originally uploaded by ghaff

This was taken at a Hawaii-themed party at LinuxCon/CloudOpen a couple of weeks back. The whole conference was fun and it was great to catch up with various folks.

Brute force computing doesn't replace models

Writing in The New York Times' Bits blog, Quentin Hardy notes that:

The brute force computing model is changing a lot of fields, with more to follow. It makes sense, in a world where more data is available than ever before, and even more is coming online, from a connected, sensor-laden world where data storage and server computing cycles cost almost nothing. In a sense, it is becoming a modification of the old “theorize-model-test-conclude” scientific method. Now the condition is to create and collect a lot of data, look for patterns amid the trash, and try to exploit the best ones.

I rather like the term "brute force computing."

On the one hand, it generalizes beyond Big Data to Big Compute as well. The common thread is that bits of storage and cycles of computing are cheap enough that they don't need to be applied judiciously. The article offers an example from Autodesk. "The idea is to try thousands of different conditions, like temperature, humidity, tensile strength or shape, in just a few seconds. Most of the outcomes will be lousy, a couple of them will probably affirm what a designer thought to begin with, and a few might deliver surprising insights no one had considered."

In another respect, "brute force computing" is a narrower term than Big Data, which really speaks to the speed and size of the data rather than the sophistication applied to its analysis. The application of sophisticated models to large realtime data streams may fall under Big Data--but it would be hard to call that merely brute force. That there's such demand for data scientist skills is but one indicator that there's a lot more to data analytics than having a big server farm. Rather, the idea that useful results can fall out when lots of CPUs crank on lots of bytes is more akin to an idea that Wired's Chris Anderson popularized in his provocative 2008 article "The End of Theory: The Data Deluge Makes the Scientific Method Obsolete."

And that's where I'd have liked to see a bit more counterpoint in Hardy's article. It's not that lots of compute plus lots of data can't yield interesting results. But as repeatedly discussed at conferences such as O'Reilly's Strata, it isn't that simple. The numbers often don't just speak for themselves. The right questions have to be asked and the right models, however refined and tested by data and compute, developed. "Brute force computing" has a place, but it's got an even larger place when augmented with intelligence.

Links for 09-11-2012

Monday, September 10, 2012

Links for 09-10-2012