Vertiv Develops Solutions to Edge Computing Trends

Website Hosting Review Interview with John Hewitt, President of the Americas, Vertiv

By Contributing Editor, Kathy Xu

John Hewitt is President of the Americas at Columbus, Ohio-based Vertiv. Vertiv designs, builds, and services critical infrastructure that enables vital applications. Its portfolio comprises power, cooling and IT infrastructure solutions and services, extending from the cloud to the edge of the network. Hewitt has been with Vertiv since October 2017 and oversees operations and business development in the United States, Latin America, and Canada.

Website Hosting Review recently had the opportunity to speak with Hewitt in preparation for Edge Congress 2018 in Austin, Texas.

Website Hosting Review, Kathy Xu (DCP-KX) Question: Tell our readers about your company: What do you do and what problems are you attempting to solve?

Vertiv, John Hewitt (V-JH) Answer: Vertiv brings together hardware, software, analytics and ongoing services to ensure our customers’ vital applications run continuously, perform optimally and grow with their business needs. Vertiv solves the most important challenges facing today’s data centers, communication networks and commercial and industrial facilities with a portfolio of power, cooling and IT infrastructure solutions and services that extends from the cloud to the edge of the network.

DCP-KX Q: We understand that edge is a popular topic in computing today. What does the edge mean to Vertiv?

V-JH A: We have been hearing about edge computing for years. What that meant in the past was typically better described as distributed computing. Now edge has expanded far beyond distributed models to encompass any method of computing that is moved outside a traditional data center to be closer to the end consumer of the data. Vertiv recently researched this topic and found more than 100 common edge use cases. Based on similar features of the data sets and compute requirements, we identified archetypes that will help businesses better understand how to address challenges such as latency and data capacity, and learn how to future-proof their infrastructure. Some of the fastest growing use cases include HD content distribution, autonomous cars, smart cities and buildings, digital health and augmented reality.

DCP-KX Q: Let’s take a look at Vertiv’s solutions. How has the market been adopting Vertiv? Are there any surprising or unique ways that companies are using your services?

V-JH A: Our customers often surprise us with unique needs, and we are happy to rise to the challenge. Some recent edge cases include fish farms looking for new ways to apply technology for more sustainable operations – monitoring fish health and managing the feeding schedules by applying sensors, monitors, servers and communications technology. 5G is revolutionary for the communications industry, and telecom companies are finding ways to efficiently update the infrastructure supporting cell sites and other connections, enabling a transition to 5G technology while continuing to support their current equipment.

DCP-KX Q: Let’s look at trends as they relate to retail. What major trends and technologies do you see as transformational to the edge space, and what role does Vertiv play?

V-JH A: Retailers are racing to transform all aspects of their business to optimize the customer experience, from in-store to online to warehouse. Some brands are deploying unique experiences in brick-and-mortar stores, such as smart mirrors that allow shoppers to browse for other sizes and accessories in the fitting room, and augmented reality apps to navigate the store. Warehouses are deploying robots and digital tools that reduce cost and improve access to data, enabling a faster and more reliable experience. Online availability remains a focus, with more businesses adding or growing their online presence.

DCP-KX Q: Smart cities are a hot topic today. Is there a connection between edge computing and smart city infrastructure?

V-JH A: Another segment making a huge impact on edge computing is Smart Cities. More cities are moving toward using data collection and sensors to manage resources intelligently, changing how we live and work and reshaping the landscape of edge computing.

In terms of technology, the movement is toward scalable, intelligent infrastructure to support all of these transformational applications. An agile infrastructure allows for rapid deployment and change, and intelligence allows distributed architectures to be centrally monitored and managed.

DCP-KX Q: Let’s take a look at future possibilities for Vertiv. What new developments and initiatives can we anticipate from Vertiv as we head toward 2019?

V-JH A: Vertiv will be sharing a new cloud-based monitoring platform that empowers customers with deeper insights across the data center and remote sites, to identify and predict problems before they occur. This allows businesses to optimize physical and human resources while securely accessing the information needed to meet SLAs.

You can also expect to see more integrated edge infrastructure solutions that meet customer challenges of fast deployment, scalability, availability, security and efficiency.
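Platforms that aim to "predict problems before they occur" typically compare live readings against a rolling baseline and alert on sudden deviations. A minimal illustrative sketch of that idea, with hypothetical thresholds and data and not representative of Vertiv's actual product:

```python
from collections import deque

def make_anomaly_detector(window=10, tolerance=1.5):
    """Flag readings that deviate from a rolling baseline by more than
    `tolerance` times the baseline's recent spread (illustrative only)."""
    history = deque(maxlen=window)

    def check(reading):
        if len(history) < window:
            history.append(reading)
            return False  # still building the baseline
        avg = sum(history) / len(history)
        spread = max(history) - min(history) or 1.0
        anomalous = abs(reading - avg) > tolerance * spread
        history.append(reading)
        return anomalous

    return check

# Example: stable UPS temperature readings, then a sudden spike
check = make_anomaly_detector()
readings = [21.0, 21.2, 20.9, 21.1, 21.0, 21.2, 21.1, 20.8, 21.0, 21.1, 29.5]
alerts = [r for r in readings if check(r)]
```

In practice a production platform would use far richer models, but the core loop — baseline, compare, alert — is the same.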

DCP-KX Q: Edge will be a major topic at this event. What do you look forward to learning and discovering at Edge Congress?

V-JH A: Vertiv partners with many of the industry leaders contributing to the conference and attending the sessions. I’m looking forward to listening to their ideas and hearing about what excites them about the edge. We have a lot to learn from each other, and I’m excited about collaborating on ways to surprise our customers with innovations and solutions to meet current challenges and to ease the path for future changes.

DCP-KX Q: Thank you for your time, John. We are looking forward to Edge Congress 2018.

ZenFi Networks Leverages Sitetracker Platform to Boost Business Growth

Originally published to TelecomNewsroom.

Sitetracker, the project and asset management standard for infrastructure owners and developers, recently announced their collaboration with ZenFi Networks, an innovative, locally-owned and operated communications infrastructure company serving the New York and New Jersey metro region. ZenFi Networks will be implementing the Sitetracker Platform to effectively manage multiple teams designing, building and maintaining high-volume small cell and fiber assets.

ZenFi Networks will leverage Sitetracker’s utilities to complement their own accommodating support services, nimble problem solving and deep industry experience. Characteristics like these allow the company to quickly overcome hurdles and navigate challenging construction logistics to meet the ever-growing network access demands of their customers.


To read the full blog, please click here.


eX² Technology Provisions the Future of Communications Infrastructure

Website Hosting Review Interview with Misty Stine, Executive Vice President of Business Development, eX² Technology

By Contributing Editor, Kathy Xu

Misty Stine is the Executive Vice President of Business Development at eX² Technology, a company that specializes in designing, installing and maintaining robust broadband, intelligent transportation and critical infrastructure networks for government agencies, consortiums and public-private partnerships (P3). Stine has more than 30 years of experience in the communications and critical infrastructure security industries and is a founding member of eX² Technology. She uses her deep industry knowledge to drive marketing strategy, generate new business and develop relationships with customers, businesses, vendors and suppliers. Prior to forming eX², Stine served as the Vice President of Business Development for G4S Technology/Adesta where she held senior management responsibility for the company’s strategic corporate initiatives, capturing business within the Energy, Transportation, Communications, Commercial and Government market sectors. She has an accomplished record of capturing more than $1 billion in new business and is an active member of INCOMPAS, ITS America, Fiber Broadband Association, SHLB, IBTTA and UTC.

Website Hosting Review recently sat down with Stine to discuss the future of telecommunications and how The 2018 INCOMPAS show facilitates industry success.

Website Hosting Review, Kathy Xu (DCP-KX) Question: Tell our readers about your company: What do you do and what problems are you attempting to solve?

eX² Technology, Misty Stine (ET-MS) Answer: eX² Technology is a single source solution for those seeking to build, scale or upgrade their communications infrastructure.

Our industry is seeing record fiber deployment as the foundation for numerous service delivery mechanisms. This demand-based broadband growth is outpacing the availability of federal and state aid as well as capital funding by existing providers, making it difficult for the industry to keep up with demand. We feel there is a unique opportunity to deliver value and achieve a win-win-win scenario for multi-tiered segments of the industry.

eX² brings focus to opportunities with a total-value perspective and aligns customer engagements from a multi-stakeholder viewpoint. We bring in multiple parties and work together to achieve each party’s goals and address each of their business needs. Examples include bringing in partners for joint builds, overbuilding and marketing excess dark fiber capacity. This helps offset upfront capital costs through shared infrastructure maintenance costs, public-private partnerships and multi-agency contracts. When every party involved can maximize their ROI, it leads to mutually successful results.

DCP-KX Q: What are you hoping to achieve at The 2018 INCOMPAS Show?

ET-MS A: The INCOMPAS Show is one of the very few places with the capacity to bring many of the stakeholders and senior decision makers together in-person to discuss and develop these win-win-win business models. There is an abundance of parties looking to build, buy, lease, and invest in network infrastructure in one place. We hope to take advantage of the availability of these stakeholders to nurture deployment projects that provide value to each involved party.

DCP-KX Q: How does your company keep up with the demands of the quickly growing and ever-changing telecommunications industry?

ET-MS A: We continually research and test new methods, and we constantly evaluate new technologies to streamline efficiency and improve the end-user experience. We’ve invested heavily into GIS-based tools and systems and refined our engineering processes to take advantage of existing practices while integrating new products, procedures and construction methods.

DCP-KX Q: What do you think is the greatest challenge in the network infrastructure space?

ET-MS A: There is a noticeable talent gap in our industry. On one end of the spectrum are telecom infrastructure professionals with decades of experience. On the other is a younger, less experienced workforce, with very few seasoned professionals in between. In an industry with mounting demand for more infrastructure, we need to rapidly identify, develop and train the inexperienced workforce while they can still benefit from the lessons learned by their veteran counterparts. A robust talent base helps the industry keep up with the demands for increased bandwidth for the IoT, smart applications and ecosystems, 5G and 6G deployments, small cells and beyond.

DCP-KX Q: Is your company looking at Artificial Intelligence (AI) solutions, and if so, how do you see AI impacting your business?

ET-MS A: No, we are not currently looking at Artificial Intelligence solutions. However, eX² remains vigilant about new technologies and how they may impact our business.

DCP-KX Q: Thank you for your time, Misty. To learn more about eX² Technology, visit, or get in touch with Misty at

Why the IoT Needs Out-of-Band Management

By Marcus Doran, Vice President and General Manager, Rahi Systems

The Internet of Things (IoT) promises to revolutionize entire industries through greater efficiency, enhanced customer service and improved decision-making. Forward-thinking organizations are also developing new products, services and business models that leverage internet-connected devices that gather data and operate autonomously.

Imagine retail store shelves that send an alert to the warehouse when they need to be replenished. Smart buildings that automatically control lighting and temperature based upon room occupancy and ambient conditions. Fleet vehicles that calculate the best route, and machinery that schedules its own maintenance.

But for all its promise, the IoT has introduced some confounding problems. Organizations are finding that a centralized IT environment creates unacceptable latency when it comes to analyzing IoT data. As a result, more and more computing resources are being pushed to the network edge, closer to users and IoT devices. This creates a far-flung network infrastructure that’s extremely difficult for IT teams to manage.

The latest out-of-band management solutions address this challenge. Originally designed to give IT personnel access to devices when the primary (in-band) network is unavailable, out-of-band management has evolved to support advanced infrastructure management and automation tools. Out-of-band management also gives IT personnel access to equipment that has not yet been configured, enabling zero-touch provisioning of IoT devices and edge systems.

Nodegrid Bold SR from ZPE Systems is an example of a compact form-factor appliance that provides resilient out-of-band management for the edge and IoT. Secure access and control of IT and IoT devices through this kind of appliance enables a virtual IT presence at the edge of the network, so IT support staff can manage devices remotely, simplifying administration and troubleshooting while reducing staffing and travel costs.

The software-defined networking (SDN) capabilities of out-of-band management appliances provide a centralized view of infrastructure assets, and allow for automation and policy-based orchestration of network services and applications. Out-of-band solutions have also evolved to protect networks with enterprise-grade security protocols and encrypted data transit.

Nodegrid Bold SR leverages network functions virtualization (NFV) to provide routing, firewall, system monitoring and VPN capabilities. NFV is an important capability for out-of-band management because it saves physical space and cuts hardware costs through the virtualization of physical network appliances. It also speeds deployment because there’s less hardware to install, and provides operational cost savings through reduced power consumption and cooling.

Remote device management appliances also need redundant connectivity methods with automatic failover to reduce downtime. Appliances should support a variety of connection methods, including serial, network and USB, and ideally provide in-band and out-of-band remote access and power controls. Quality out-of-band management appliances also offer device health monitoring, alert notifications and actionable data capabilities.
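The automatic failover described above amounts to trying connection paths in priority order, in-band first, then out-of-band, until one succeeds. A simplified sketch of that logic (path names and connection functions are hypothetical, not a real appliance API):

```python
def reach_device(paths):
    """Try each (name, connect_fn) in priority order; return the first
    path that succeeds, or None if the device is unreachable."""
    for name, connect in paths:
        try:
            session = connect()
            return name, session
        except ConnectionError:
            continue  # automatic failover to the next path
    return None

# Example: the primary in-band network is down, but a cellular
# out-of-band link still reaches the device's serial console.
def in_band():
    raise ConnectionError("primary network unavailable")

def oob_cellular():
    return "serial-console-session"

result = reach_device([("in-band", in_band), ("out-of-band", oob_cellular)])
```

Real appliances implement this in firmware across serial, network and USB connections, but the ordering-and-fallback principle is the same.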

The IoT is compelling organizations to implement micro-environments at the network edge to provide compute and storage resources for a wide range of devices. IT teams need a new set of tools that enable remote monitoring and management of these edge environments. Remote management appliances like Nodegrid Bold SR address this need by extending out-of-band management capabilities to the IoT, and providing reliable, secure and efficient connectivity for edge networks.

About the Author

Marcus Doran – VP & GM at Rahi Systems Europe

Marcus Doran is an experienced data center infrastructure sales professional with 20 years’ experience in sales growth, revenue generation and new business development. He joined Rahi Systems in April 2016. Throughout his two-decade career, Marcus has worked across Ireland, the Middle East and the UK as a Sales Manager, a Channel Manager and a Major Account Manager.

Marcus Twitter Handle: @marcusdoran

Rahi Systems Twitter Handle: @Rahi_Systems

Rahi Systems LinkedIn Company Page:

Birmingham, Alabama, a Key Edge Growth Market, Is About to Ramp Up Its Participation in the Digital Economy

Originally posted to Telecom Newsroom

Edge data centers remain a hot category in data center site selection, and will continue to be into 2018 and beyond, according to a recent report from MarketsandMarkets. Consumer demand for high-bandwidth services and business demand for high-capacity applications such as the Internet of Things (IoT) are among the primary drivers of demand for edge data centers. Other sectors that see the benefit of edge data centers include retail, manufacturing, healthcare and financial firms, especially in underserved markets. The research firm projects that the edge data center market will surge from $1.5 billion one year ago to $6.7 billion in 2022, a compound annual growth rate of 35 percent.
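The projected growth rate is easy to verify: growing $1.5 billion into $6.7 billion over the five years from 2017 to 2022 implies a compound annual growth rate of roughly 35 percent.

```python
# Verify the reported CAGR: (end / start) ** (1 / years) - 1
start, end, years = 1.5, 6.7, 5  # $ billions, 2017 to 2022
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~34.9%, matching the reported 35 percent
```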

Just recently, DC BLOX announced the development of a new data center facility in Birmingham, Alabama, a currently underserved market with enormous growth potential. The Atlanta-based provider of data center, network and cloud services at the edge plans to use the 27-acre former Trinity Steel site in downtown Birmingham to develop a technology and innovation campus that will drive the digital economy in Birmingham and across Alabama.

“The significant investment being made by DC BLOX to open this data center in Birmingham will not only create high-paying jobs, but also bring an exciting new chapter to a neighborhood in the city with a long industrial history,” Alabama Governor Kay Ivey commented about the development. “We’re committed to positioning Alabama for a technology-focused future and look forward to working with the company to accelerate that process.”

To read the full blog, please click here.

Edge Infrastructure and the Fourth Industrial Revolution

In a recent article that appeared in Light Reading, “Time for an Edge Computing Reckoning,” Senior Editor Mari Silbey addressed the subject of edge infrastructure, especially where it concerned near-futuristic applications such as smart cities, autonomous vehicles and drones.

There’s no question that these are exciting, potentially transformative instances of connected things which will change how we live, work and play. But as Silbey asks, how practical are they for mass-market adoption given the present limitations in infrastructure? And what are the smart killer apps that will encourage network operators to anticipate favorable cost-revenue projections, supporting new investment to make them a reality?

Today, there is constant pressure on networks to satisfy intensive traffic requirements with the lowest possible latency — pressures that will only increase exponentially with the arrival of the IoT into our homes, businesses and communities. Gartner predicts that more than 21 billion IoT devices will come online by 2020.

The stumbling block is that IoT applications require back-end computation to happen seamlessly, with practically no latency. The disruptive power of industrial and consumer IoT applications such as robotic manufacturing, machine learning and augmented reality depends on unprecedented amounts of data being transferred and processed near-instantaneously. Because it is not realistic to transmit data from billions of devices to the cloud, process it, and then relay it all back to the source, organizations will need to process data closer to the sensors at the edge and send only the resulting analytics to the cloud.
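The pattern described here, processing at the edge and shipping only results upstream, can be sketched as a simple aggregation step. The field names and readings below are illustrative, not from any particular product:

```python
import json

def summarize_at_edge(readings):
    """Reduce raw sensor samples to a compact summary before upload."""
    return {"count": len(readings),
            "min": min(readings),
            "max": max(readings),
            "mean": sum(readings) / len(readings)}

# 1,000 raw temperature samples collected at the edge
raw = [20.0 + (i % 7) * 0.1 for i in range(1000)]
summary = summarize_at_edge(raw)

raw_bytes = len(json.dumps(raw).encode())
summary_bytes = len(json.dumps(summary).encode())
# The summary is dramatically smaller than the raw stream, which is
# the bandwidth and latency win of computing near the sensors.
```

Real edge analytics are far more sophisticated, but the economics are the same: a few bytes of insight travel upstream instead of the entire sensor stream.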

Where and What Defines the Edge?

As EdgeConneX® Chief Innovation Officer, Phill Lawson-Shanks, has commented, “Consumers are accessing data via wireline and wireless infrastructure at ever increasing rates, with ever higher expectations. Consequently, bandwidth and latency issues are driving solutions to the edge.”

That said, in the Internet of Everything, where is this edge and how do we define it? As Silbey writes, “From a broader industry perspective, part of the issue with edge computing is also understanding where the demarcation point for the edge exists. Is it at a data center? A cable node? A small cell? A home gateway? A smartphone?”

According to Lawson-Shanks, “the edge is the transition point between where a service is offered and where it’s consumed. And by that accounting, the edge will move depending on what the service is.”

This definition reflects EdgeConneX’s unique approach to data center architecture. While other data center providers have approached the market with an “if we build it, they will come” mentality, as Lawson-Shanks has said, “We build a data center where people need it, build it quickly, and bring in an ecosystem to support it.”

One example of such an ecosystem is the EdgeConneX Edge Data Center® (EDC) in Buenos Aires, Argentina. EdgeConneX collaborated with a telecommunications service provider and an extensive network of ecosystem and peering partners to build a Buenos Aires EDC that will serve as a robust connectivity and peering platform, offering customers extensive fiber, density and peering options with wholesale economics. As a result, enterprises and end users in an underserved region in South America will now gain access to low-latency connectivity and content delivery, as well as advanced cloud and communications services that were previously unavailable.

Lawson-Shanks has also called for the development of new ecosystems around the “micro-edge.” As Silbey writes, “These would include groups of organizations that invest in and develop edge systems presumably through some kind of neutral and open platform. Traditional operators will continue to own most last-mile connectivity (at least for the foreseeable future), but others will step in to help build on top of those network links.”

But before our streets and highways see autonomous vehicles, Amazon delivers packages to our doors with the help of drones, and smart utilities serve our communities, there is much work to be done.

Today, London, New York and Tokyo are all pursuing smart city initiatives. But so are Tier II cities such as Denver and Portland, and Latin American locales including Buenos Aires and Rio de Janeiro. If the IoT promises an era of ubiquitous connectivity, the Fourth Industrial Revolution, it will be edge infrastructure, Edge Data Centers and supporting ecosystems that make good on that promise.

The Right Tools Make Data Center Operations Safer and More Efficient

By Marcus Doran, Director of Sales, Europe at Rahi Systems

Data center operators are always looking for ways to save money and improve efficiency. Having the right tools can shave minutes off of common tasks, reducing costs and freeing up valuable staff resources. That’s why server lifts have an important role to play in today’s data center environment.

Servers and other IT equipment can be quite heavy. Servers typically weigh 75 to 100 pounds, and some equipment can weigh up to 800 pounds. Maneuvering the gear into a rack or cabinet in a dense data center environment is tricky, generally requiring at least two people and 30 minutes to install a single piece of equipment manually.

Even with multiple people lifting the equipment there is a high risk of injury. According to the Occupational Safety and Health Administration, manual handling of material is the primary cause of work-related injuries in the U.S., with 80 percent of those injuries affecting the lower back. The U.S. Department of Labor notes that back and shoulder injuries account for more than 36 percent of workdays missed due to injury. Given these statistics, it makes sense to invest in a mechanical device designed to lift heavy IT gear.

ServerLIFT’s server lifts have rugged steel frames in a compact design that can navigate narrow aisles. Large wheels easily roll over grates, door stops and other obstacles, and won’t leave marks on raised flooring. Swivel handles enable smooth control, and a foot-operated dual-point stabilizer brake prevents movement during equipment installation.

The low-profile platform makes it easy to load equipment onto the lift and install the equipment in the bottom of a rack. The lifting mechanism raises equipment 8 feet or more to accommodate taller racks and cabinets. Some lifts have side-loading capabilities, enabling equipment to be installed while the unit is parallel to the rack.

Some data center racks now have slide rails with slots that align with nail heads on the server. Alignment is time-consuming and difficult, requiring that the server be held at a precise angle. Some products work in conjunction with the lifts to provide control over the angle of the server during installation and removal.

In today’s dynamic data center environment, equipment is frequently installed, moved and removed. ServerLIFT products can save time, reduce risk and improve the day-to-day operations of your data center. ServerLIFT offers a comprehensive line of powered and mechanical data center lifts and related accessories that make it easy to transport, position and install data center equipment.


About the Author

Marcus Doran, Director of Sales, Europe at Rahi Systems, is an experienced data center infrastructure sales professional with 20 years’ experience in sales growth, revenue generation and new business development. He joined Rahi Systems in April 2016. Throughout his two-decade career, Marcus has worked all over Ireland, the Middle East and the UK as a Sales Manager, a Channel Manager and a Major Account Manager. Marcus’ Twitter handle is @marcusdoran.  Rahi Systems’ Twitter handle is @Rahi_Systems.

Blockchain Tech Provider Bitfury Gains Government Approval For $35 Million Norway Datacenter

Originally posted to Coin Telegraph by William Suberg.

Blockchain technology company Bitfury announced March 20 that it will open an “energy efficient” datacenter in Norway, in a deal with the government’s blessing.

In a blog post, the company confirmed it would open two sites around the town of Mo i Rana, investing 274 million kroner ($35 million) in infrastructure and hiring 30 employees.

The move comes at a time when Bitcoin mining in particular is under scrutiny for its environmental impact and wasteful manufacturing process.

Commenting on the datacenter, Norway’s Minister of Trade and Industry Røe Isaksen said he was “delighted” Bitfury had opted to set up in the country.

To read the full article, please click here.

NVMe-oF Explained for the Storage Industry by David Woolf, Senior Engineer, Datacenter Technologies, UNH-IOL

NVMe, or Non-Volatile Memory Express, is a streamlined protocol specifically designed for flash memory. Because the protocol is lightweight, the storage controller can be greatly simplified relative to a legacy SCSI (Small Computer System Interface) storage controller, reducing latency and optimizing performance. NVMe also leverages the widely adopted PCIe (Peripheral Component Interconnect Express) interface as the physical transport mechanism. This combination is what makes the NVMe protocol so attractive.

PCIe, however, has its own limitations, especially when trying to create large pools of flash storage. While storage array nodes can be connected using external PCIe, it’s not a scalable solution.

NVMe-oF extends the capability of NVMe by allowing multiple storage array nodes to be connected over a fabric. Connecting over a fabric provides many benefits, such as redundant connections, traffic management, and the creation of very large pools of storage.

Of course, creating large pools of storage is not new. Fibre Channel and SAS have done this quite effectively for many years. What is new is the ability to create large pools of flash storage using the streamlined NVMe protocol.

Just as NVMe is used as a protocol over PCIe within a server or storage array, NVMe is used as a protocol over the fabric interface between storage arrays. The primary fabric technologies gaining traction today are RoCE (RDMA over Converged Ethernet) and Fibre Channel. When implementing NVMe-oF solutions, interoperability and conformance to NVMe standards are important to bring these technologies to maturity and market faster. All the components of an NVMe-oF solution, including drive enclosures, host bus adapters (HBAs), switches and the internal NVMe storage devices, need to be tested for interoperability.
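The pooling and redundancy benefits described above can be modeled simply: several array nodes, each reachable over redundant fabric paths, contribute namespaces to one logical pool, and a node remains usable as long as any one of its paths is up. Node names, capacities and path labels below are hypothetical:

```python
class ArrayNode:
    """One NVMe storage array node exposed over the fabric (e.g. RoCE)."""
    def __init__(self, name, namespaces_tb, paths):
        self.name = name
        self.namespaces_tb = namespaces_tb  # capacities of its NVMe namespaces
        self.paths = paths                  # redundant fabric connections

    def reachable(self):
        # The node stays usable as long as any one fabric path is up
        return any(up for _, up in self.paths)

def pool_capacity_tb(nodes):
    """Total usable capacity across all reachable fabric-attached nodes."""
    return sum(sum(n.namespaces_tb) for n in nodes if n.reachable())

nodes = [
    ArrayNode("array-1", [15.36, 15.36], [("roce-a", True), ("roce-b", True)]),
    ArrayNode("array-2", [7.68, 7.68], [("roce-a", False), ("roce-b", True)]),
    ArrayNode("array-3", [30.72], [("roce-a", False), ("roce-b", False)]),
]
# array-2 survives on its second path; array-3 has lost both paths
usable = pool_capacity_tb(nodes)
```

This is only a toy model; real NVMe-oF deployments handle discovery, multipath I/O and failover at the driver and fabric level, but the scaling logic, many nodes and many paths behind one pool, is what distinguishes NVMe-oF from direct PCIe attachment.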

There is an ongoing effort to bring some SCSI-like services and management to NVMe while maintaining the relatively lightweight protocol that has enabled these high-performance, low latency drives. Striking a balance in deployment of these services will be key to keeping NVMe speedy and lightweight.

NVMe has been very successful in the PC space. The introduction of NVMe-oF will accelerate the adoption of NVMe in the data center.

About the Author

David Woolf is the Senior Engineer, Datacenter Technologies at the University of New Hampshire InterOperability Laboratory (UNH-IOL). He has developed dozens of industry-reviewed test procedures and implementations as part of the team that has grown the UNH-IOL into a world-class center for interoperability and conformance testing. David has also helped organize numerous industry interoperability test events, both at the UNH-IOL facility and at off-site locations. He has been an active participant in a number of industry forums and committees addressing conformance and interoperability, including the SAS Plugfest Committee and the SATA-IO Logo Workgroup; he also co-chairs the MIPI Alliance Testing Workgroup and coordinates the NVMe Integrators List and Plugfests.

R.I.P., XaaS: Users Must Not Drown in Alphabet Soup by Adam Stern, Founder and CEO of Infinitely Virtual

I take no satisfaction in accurately predicting that the defining trend for 2017 would be the weaponization of the Internet of Things. After WannaCry and NotPetya slammed unsuspecting, unprotected organizations with epic ransomware attacks last year, I’m aware that prescience is not always its own reward.

As we enter the second month of 2018, I’m embracing what the IT establishment might regard as a contrarian forecast.  I’m referring to the industry’s infatuation with XaaS.

Regarding XaaS, that suddenly ubiquitous mnemonic, the big kahunas are all aflutter. “Accenture is all in,” the company says. “There’s a new era of service delivery and you don’t want to be left behind. Now’s the time to transition to ‘as-a-Service’ … to innovate faster, drive revenue and reduce costs.”

That lovefest has extended to IT pundits as well. TechTarget put it this way: “What is XaaS (anything as a service)? XaaS is a collective term said to stand for a number of things, including ‘X as a service,’ ‘anything as a service’ or ‘everything as a service.’ The acronym refers to an increasing number of services that are delivered over the Internet rather than provided locally or on-site. XaaS is the essence of cloud computing.”

Except that it isn’t.

Call me a curmudgeon but I’m considerably less smitten with XaaS, which, as an organizing model for IT and cloud computing, doesn’t merely suffer from diminishing returns.  It epitomizes no returns.  A year from now, XaaS will be on fumes.

My bête noire here is “TaaS” – “Technology as a Service.”  For my money, it represents the nadir among four-letter handcuffs.

It isn’t a matter of being snarky to diss this pointless acronym stew. These initials hamstring the creativity of the people who are devising needed services. As an industry, we’re trying to solve business problems, and that should be far more important than figuring out which bucket anyone fits into. 2018 will be the year when we realize that it’s ridiculous to silo off these industries and sub-industries, and finally stop feeding a cottage industry of initials that classify everything. Meanwhile, actual solutions are becoming so varied and so interconnected that no snappy four-letter template will be capable of describing actual deliverables.

The push to reduce and simplify is being driven by a combination of marketing gurus who are unfamiliar with the technology, and industry analysts who believe everything can be plotted on a two-dimensional graph.  Service providers are trying to deliver products that don’t necessarily fit the mold, so it’s ultimately pointless to squeeze technologies into two or three dimensions. These emerging solutions are much more nuanced than that.

We assume that one infrastructure service company is the same as another, and that because they are all IaaS providers, they must do exactly the same thing, with interchangeable feature sets and immaterial underlying architecture. The message is that it doesn’t matter what equipment they’re using and it doesn’t matter what choice you make. But in fact, it does. Never mind the analysts; cloud computing is not a commodity business. And never mind the Street; investors and certain others fervently want it to be a commodity, but because those certain others go by the names of Microsoft and Amazon, fuzzing the story won’t fly. They want to grab business on price and make scads of money on volume. They win with one-size-fits-all.

The year ahead offers real opportunity, if vendors finally level with users. The devil really is in the details. There are literally hundreds of decisions to make when architecting a solution, and those choices mean that every solution is not a commodity. The sooner we finish off the alphabet soup, the sooner we can get to the meat of the matter. Digital transformation, if it happens in 2018, won’t emerge from some marketing contrivance, but from technologies that make cloud computing more secure, more accessible and more cost-effective.

About the Author

Adam Stern is founder and CEO of Infinitely Virtual in Los Angeles. Visit or contact @IV_CloudHosting.