Software-as-a-Service (SaaS) is transforming the relationship between software providers and their customers. Vendors and customers are increasingly designing and building software products and services more collaboratively—even mission-critical business applications like ERP. That would have been unthinkable a few years ago. Of course, a software company’s biggest customers have always had some say in requesting new features and changes.
In the past, we were concerned with recovering applications and data within the confines of the physical data center. Disaster recovery focused on rebuilding the existing hardware, software, and applications at a location apart from a compromised operations site. In other words, we needed a box to replicate the box.
In today’s world of virtualization and public and private clouds, there are more options than ever for infrastructure and operations teams. On the one hand, this degree of flexibility and choice gives IT professionals many more tools with which to build networks and address challenges. On the other hand, it can also lead to confusion with respect to when and where to use these options to best effect.
As an IT veteran, I have observed and helped drive tremendous change, not just in the technologies we use but in how we use them. Among the more dramatic shifts has been the definition of being a mobile worker. When I was a mainframe programmer in the late 1980s, being “highly mobile” meant I could log in at any mainframe terminal in the office, do my programming, submit my jobs to the queue, and check my (internal-only) email.
One of the newest buzzwords in IT organizations is “DevOps.” The principles of DevOps run contrary to how IT has traditionally managed software development and deployment. So why are more and more organizations looking to DevOps to help them deliver IT services to customers better, cheaper, and faster? And what exactly is DevOps anyway? It is not a job, a tool, or a market segment; it is best defined as a methodology, or an approach.
Many of VMware’s customers have deployed private or hybrid clouds and automated the delivery of Infrastructure as a Service. While this accelerates the delivery of individual machines from days to hours and minutes, it only addresses about 10% of the overall business agility problem. Someone has to take those infrastructure components and […]
The market here is very young, with software-centric solutions from startups like Nutanix and SimpliVity competing with EMC, Cisco, and Hewlett Packard Enterprise, which just released Synergy, a hyper-converged “composable infrastructure” box that treats infrastructure as code.
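To make “infrastructure as code” concrete: the idea is that the desired state of a machine is declared as data, and tooling computes the actions needed to reach it. The sketch below is a toy illustration of that declarative/reconcile pattern only; the `ServerSpec` and `reconcile` names are hypothetical and do not correspond to Synergy’s or any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServerSpec:
    """Declarative description of a server (hypothetical, for illustration)."""
    cpus: int
    ram_gb: int
    disks: int

def reconcile(current: ServerSpec, desired: ServerSpec) -> list:
    """Compare current state to desired state and list the changes needed."""
    actions = []
    if current.cpus != desired.cpus:
        actions.append("set cpus {} -> {}".format(current.cpus, desired.cpus))
    if current.ram_gb != desired.ram_gb:
        actions.append("set ram_gb {} -> {}".format(current.ram_gb, desired.ram_gb))
    if current.disks != desired.disks:
        actions.append("set disks {} -> {}".format(current.disks, desired.disks))
    return actions

if __name__ == "__main__":
    # Grow a 4-CPU, 2-disk box into an 8-CPU, 4-disk box; RAM is unchanged.
    print(reconcile(ServerSpec(4, 16, 2), ServerSpec(8, 16, 4)))
```

The point of the pattern is that the spec, not a sequence of manual steps, is the source of truth: version the spec, diff it, and let the tooling drive the hardware toward it.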
While “boring” is a good thing as far as data center operators are concerned, 2015 was not a boring year for the data center industry. From data center outages on both sides of the Atlantic, caused by lightning and an explosion, to one of the biggest data center providers thinking of offloading its data centers, to a big change to the way the industry’s most important reliability rating system works, the year delivered plenty of excitement for some and anxiety for others.
According to IDC’s latest assessment of the market, cloud IT infrastructure grew by 25 percent year-over-year, compared to traditional IT infrastructure, which increased by only six percent over the same period. In fact, nearly one third of all IT infrastructure spending now goes to cloud technologies. Clearly, the cloud conversation is no longer a conversation about tomorrow.
One of the most fun moments in recent months was listening in on a briefing about exactly how SimpliVity’s storage works and hearing the audible “click” when Mr. Marks fully understood it all. If only all tech journalists and analysts had that time and talent to invest.
To examine the problem further, let’s compare the old with the new.