Faith, Family and Startups: Why we chose Atlanta for our startup
Originally posted on the Bridge community blog: BridgeCommunity.com/blog
It was the fall of 2014 and we were finally ready. Nearly 10 years after getting my MBA in Entrepreneurship from Southern Methodist University in Dallas, we had finally built up enough savings and it was time to leave the corporate world to embark on the new, uncertain path to building a new company. But my home in San Antonio, TX wasn’t the right fit for the new venture – lacking in both a strong personal and startup network – we needed to move.
But where? Since I'm in the cloud computing space, the Bay Area in California is an obvious first choice, but after looking at how quickly we would burn through our savings, it seemed too risky. We could go back to Dallas, where I could make great use of all my school and business connections, but its tech startup scene is only just emerging and, more importantly, our family was really feeling a need to reconnect with our roots in the South – so we took a look at Atlanta.
At first glance, the main redeeming qualities for Atlanta were the major airport and the relatively reasonable cost of living. I figured that, worst case, the flights to San Francisco and Seattle would be reasonable and we'd be within a day's drive of our families in Alabama and Florida. But as we began weaving our family into the fabric of our local community, we discovered that, almost by luck, we had found the ideal place to start our tech company.
Soon after moving to Alpharetta, the fastest growing large city in Georgia and home to great schools for our 2 sets of twins (as well as a number of technology companies large and small), we joined Georgia Tech's Advanced Technology Development Center (ATDC). Our membership turned out to be a great way to get engaged with Atlanta's technology community and an amazing source of resources.
I quickly realized that Atlanta had a vibrant technology startup scene with Tech Square, Atlanta Tech Village, the Tech Alpharetta Innovation Center and lots of startups – especially in the mobile, fintech, security and, increasingly, IoT spaces. And unlike my experience in the Bay Area, there's a strong faith-based component to our technology communities with organizations like High Tech Ministries, which was an unexpected but welcome bonus to being part of the Atlanta technology scene. The other thing we have going for us is the large number of innovative large companies in Atlanta – companies like Coca-Cola, SunTrust, The Weather Company, InterContinental Hotels Group, UPS, Cox Enterprises, Delta, NCR and many others that could serve as early adopters of new technologies.
Atlanta's strong community of large companies is a clear benefit for B2B startups, but there's a catch – corporate cultures can be risk averse, slow to move and difficult to navigate, and this is a real challenge for startups. Enter Coca-Cola's BridgeCommunity. This program, sponsored by 7 Atlanta companies, was created to bridge the gap by partnering startups with Fortune 500s for pilots and the commercialization of new technologies. It wasn't easy for us to get in – we went through 2 rounds of selection, and out of more than 200 startups, we were one of just 22 selected. Now, as a BridgeCommunity startup, we are in a great position to build deep corporate relationships and potentially land one of Atlanta's giants as our customer – a win that would catapult our startup to the next level.
Atlanta provides a unique place for startups. For us, the combination of faith and family, the vibrant startup scene, and the strong corporate business economy has proven to be an ideal fit. I know that we're losing large numbers of amazingly talented Georgia Tech, Georgia State, KSU, Emory and other grads to competing tech hubs, but as they mature and want to start families while still participating in world-class innovation, they'll realize that coming home to Atlanta is a great option. Our company is still early, but with the help of our Atlanta community and the BridgeCommunity, Convergent is in great shape to be a huge success.
Everything is data
Some don't realize it yet, but everyone now competes with the digital natives.
While Amazon, Facebook, Google and other “digital native” companies rapidly launch new products and services with a modular, automated, standardized approach (Devops/Agile), traditional companies increasingly struggle to compete because they can’t take advantage of actionable data being held hostage by traditional and SaaS software vendors, legacy systems, and business silos.
Data is the most valuable resource of the 21st century.
Digital natives understand this. As they use data to control more and more of the customer experience, they can extract more margin from companies that produce the products.
Do we really know who our digital competition is? Here are a few examples:
- Hotels/hospitality: Online travel agents (Priceline.com), Airbnb
- Professional Services, Healthcare, Dining: Yelp, Google, Opentable, Healthgrades.com
- Retail: Amazon, Google
- Marketing: Google and Facebook
- Consumer product branding and distribution: Amazon, Google, Facebook
- Logistics and Transportation: Fulfilled by Amazon, UBER
- Media: Facebook, Amazon, Netflix, Google
Case Study: Rackspace
When I joined Rackspace in 2011, we were in a tight race with Amazon for leadership in the cloud; I ran the product team that grew the VMware practice to be one of Rackspace's top businesses. But we fell behind. We couldn’t launch new features fast enough to keep pace with Amazon.
We had neither the time nor the resources to replace or fix all the legacy systems laden with technical debt. There was lots of infighting between teams with ownership of the various systems. It was excruciating and got worse over time.
Even when we did launch, new services were often anemic, buggy, and late. I saw my enterprise customers struggle with the same challenges – migrating applications to the cloud was expensive, time-consuming and risky. Upon leaving Rackspace, I realized that the problem wasn't technical debt, but process and architecture.
Rebuilding the Ship at Sea
So how do traditional companies create new customer experiences to compete with digital natives? The solution is to liberate data with integration that applies a lean, agile, and devops driven methodology to a highly automated, reconfigurable, and open architecture.
To be successful, the technology- and process-driven approach must avoid re-platforming most legacy apps, future-proof the enterprise by giving companies ongoing control over their data, provide for automated release of changes as small as a single data field, and minimize risk by making it easy to prototype use cases.
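To make the field-level idea concrete, here's a minimal, purely illustrative sketch in Python (the mapping names and structure are hypothetical, not the PrivOps implementation): each mapping is a small, versioned unit, so a change to a single field can be released on its own without touching the source systems.

```python
# Illustrative only: a declarative, field-level integration mapping.
# System and field names (sfdc, erp, customer.*) are hypothetical.
FIELD_MAPPINGS = [
    # Each entry is versioned and can be released independently,
    # so a change as small as one field doesn't require re-platforming.
    {"target": "customer.email",  "source": "sfdc.Contact.Email",   "version": 3},
    {"target": "customer.status", "source": "erp.Account.StatusCd", "version": 1},
]

def apply_mappings(source_records: dict, mappings=FIELD_MAPPINGS) -> dict:
    """Build a canonical record from heterogeneous source systems."""
    canonical = {}
    for m in mappings:
        system, *path = m["source"].split(".")
        value = source_records.get(system, {})
        for key in path:
            value = value.get(key) if isinstance(value, dict) else None
        canonical[m["target"]] = value
    return canonical

if __name__ == "__main__":
    sources = {
        "sfdc": {"Contact": {"Email": "jane@example.com"}},
        "erp":  {"Account": {"StatusCd": "ACTIVE"}},
    }
    print(apply_mappings(sources))
    # {'customer.email': 'jane@example.com', 'customer.status': 'ACTIVE'}
```

The point isn't the code itself; it's that when integrations are declared this way, automation can validate and deploy them piecemeal instead of as monolithic releases.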
This approach sets the groundwork for the digital enterprise that automates, rapidly reconfigures and scales new use cases for rich data, analytics, IoT and AI.
This is no small task; it requires a strategic view of data and a new unified approach that covers not just what we think of as traditional data, but also integrates and automates across all infrastructure, software, workflows and policies, both on and off the cloud.
Because point solutions make integrating data and scaling use cases harder, not easier, a unified, standardized approach to data management across all types of data is a must to support reconfigurability, automation and technical debt avoidance.
The ability to reconfigure data and applications on the fly is required to support new business models and changing competitive situations. An architecture with open source in the data integration layer is required to support this. Proprietary systems, by their very nature, hold data hostage to maximize profits, and prevent companies from having the flexibility needed to adjust to the market.
And finally, the architecture must be purpose designed from the ground up to support enterprise grade features like security, audit and compliance. We can compete with the digital natives and win, but we need the right data strategy, and the right platform to execute that strategy.
We've created the PrivOps Matrix to help you get started with this approach by piloting digital integration use cases. You can deploy the integration fabric to the cloud in a few hours and prototype new integration use cases in 1 to 2 months. It's completely automated, self-deploying to the cloud, and the architecture is designed from the ground up to scale up across multiple clouds and datacenters employing both Agile and Devops methodologies. It's also built on open source software, meaning you own your data without having to worry about predatory software vendors.
Your projects requiring integration would benefit enormously from this new approach, so let’s pilot ways to be more agile like the Amazons and Googles of the world.
-Tyler
Set your data free, set your company free
PrivOps selected for 2017 Bridge Community
Great news to share! We've been selected as one of the 22 startups in the BridgeCommunity's 2017 program cohort. To say we're thrilled would be an extreme understatement. This program connects us to Fortune 500s looking to partner and pilot with startups to solve their most pressing issues. The amazing line-up of enterprises we get the opportunity to connect with over the next 6 months includes: Capgemini, Coca-Cola, Cox Enterprises, InterContinental Hotels Group, Porsche Financial Services, SunTrust Bank, The Atlanta Hawks/Philips Arena, and The Weather Company.
The BridgeCommunity’s sole objective is to help startups find the right customer inside the walls of large corporations. Based on the results from their first cohort last year (10 participating startups with 9 proof-of-concepts and pilots created), we’re in an excellent position to make strong corporate connections and land one or more deals within the program.
What Roosevelt's "Man in the Arena" quote & Jesus have to say about innovation & thought leadership
Not too long ago, I came across one of Brené Brown's masterful TED talks where she referenced Theodore Roosevelt's famous "Man in the Arena" quote: (re-posting here since it's great prose)
"It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat."
In the last few years, you may have noticed a growing trend of top technologists publishing books related to their craft as a way to achieve thought leadership in their respective fields. Makes sense:
Many are the challenges of being heard in an increasingly noisy, Internet driven marketplace of ideas.
There's nothing wrong with writing books of course (I read them avidly as it turns out), but there's a fundamental disconnect between "thought leadership" and innovation. As Brené Brown puts it:
"Vulnerability is the birthplace of innovation, creativity and change."
While putting Roosevelt and the queen of vulnerability together might seem antithetical, they do make the same point:
Innovation (or any change for that matter) requires risk.
Thought leadership as it turns out is about building your personal brand and establishing your credentials. You need other thought leaders to broadly share your insights and to present yourself as an expert. Controversial ideas, authenticity, and vulnerability can and often do get in the way of establishing your brand. As it turns out:
Risk (and innovation) is antithetical in many ways to the current model of thought leadership.
What do you do? There is no right answer, but for me the answer was to stop worrying about appearances and enter the arena, and that means becoming vulnerable. I have given up the safety of securing a middle-class lifestyle by leaving a lucrative job and re-learning how to code in order to build an unproven software platform that, until proven otherwise, no one may ultimately pay for. At the same time, I get to watch our family's savings dwindle along with the prospect of a comfortable retirement.
Jesus also has something to say on this topic:
Matthew 6:25 "Therefore, I say to you, don’t worry about your life, what you’ll eat or what you’ll drink, or about your body, what you’ll wear. Isn’t life more than food and the body more than clothes?"
I thought about this quite a bit earlier this year as we laid my mother to rest. At the end of my days, will it matter to me how much money I made or how comfortably I lived? No, what will matter is the impact on those around me, and how I tried to improve the world, be it successful or not. So for me, the question of should I focus on writing (note this is my first post in months) or coding is clear:
I choose to code.
I have faith that if I work hard enough for long enough, it will be enough. It's a work in progress though. It was hard when I quit coding last night at 2AM after finally solving a bug more experienced coders might have found much sooner in software nobody's decided to pay for (yet). Doubt creeps in...
At any point in time, you might look at your results and say "that's not world class" or "that's not good enough", but I'm going to keep going anyway, keep improving, keep trying, and let God be the judge of whether it's good enough.
I know He'll forgive me if it's not... (although my broke family may not :-)
Matthew 6:21 "Where your treasure is, there your heart will be also."
If you’re interested in technology innovation, check us out! Click here for more
-Tyler
Top 12 Quotes from Bezos' 2016 Letter to Shareholders
At over 4000 words, Jeff Bezos' 2016 letter to Amazon shareholders (posted last week) has a lot to say. While I highly recommend tech executives and investors read the entire thing, here are my top 12 excerpts from the letter:
1. Our growth has happened fast. Twenty years ago, I was driving boxes to the post office in my Chevy Blazer and dreaming of a forklift.
2. This year, Amazon became the fastest company ever to reach $100 billion in annual sales. Also this year, Amazon Web Services is reaching $10 billion in annual sales … doing so at a pace even faster than Amazon achieved that milestone.
3. AWS is bigger than Amazon.com was at 10 years old, growing at a faster rate, and – most noteworthy in my view – the pace of innovation continues to accelerate – we announced 722 significant new features and services in 2015, a 40% increase over 2014.
4. Prime Now … was launched only 111 days after it was dreamed up.
5. We also created the Amazon Lending program to help sellers grow. Since the program launched, we’ve provided aggregate funding of over $1.5 billion to micro, small and medium businesses across the U.S., U.K. and Japan
6. To globalize Marketplace and expand the opportunities available to sellers, we built selling tools that empowered entrepreneurs in 172 countries to reach customers in 189 countries last year. These cross-border sales are now nearly a quarter of all third-party units sold on Amazon.
7. We took two big swings and missed – with Auctions and zShops – before we launched Marketplace over 15 years ago. We learned from our failures and stayed stubborn on the vision, and today close to 50% of units sold on Amazon are sold by third-party sellers.
8. We reached 25% sustainable energy use across AWS last year, are on track to reach 40% this year, and are working on goals that will cover all of Amazon’s facilities around the world, including our fulfillment centers.
9. I’m talking about customer obsession rather than competitor obsession, eagerness to invent and pioneer, willingness to fail, the patience to think long-term, and the taking of professional pride in operational excellence. Through that lens, AWS and Amazon retail are very similar indeed.
10. One area where I think we are especially distinctive is failure. I believe we are the best place in the world to fail (we have plenty of practice!), and failure and invention are inseparable twins.
11. We want Prime to be such a good value, you’d be irresponsible not to be a member.
12. As organizations get larger, there seems to be a tendency to use the heavy-weight Type 1 decision-making process on most decisions, including many Type 2 decisions. The end result of this is slowness, unthoughtful risk aversion, failure to experiment sufficiently, and consequently diminished invention
Why cloud IT providers generate profits by locking you in, and what to do about it - Part 2 of 2
In the first article of this series, I spoke about how and why cloud IT providers use lock-in. I'll briefly revisit this and then focus on strategies to maintain buyer power by minimizing lock-in.
If you asked Kevin O'Leary of ABC's Shark Tank about customer lock-in with cloud providers, he might say something like:
"You make the most MONEY by minimizing the cost of customer acquisition and maximizing total revenue per customer. COCA combined with lock-in lets them milk you like a cow."
In short, they want to make as much profit as possible. So what do you do about it?
1. Avoid proprietary resource formats where possible. For example, both AWS and VMware use proprietary virtual machine formats. Deploying applications on generic container technologies like Docker and Google's Kubernetes means you'll have a much easier time moving the next time Google drops price by 20% (see the abstraction-layer sketch after this list).
2. Watch out for proprietary integration platforms like Boomi, IBM Cast Iron, and so on. The more work you do integrating your data and applications, the more you’re locked in to the integration platform. These are useful tools, but limit the scope each platform is used for and have a plan for how you might migrate off that platform in the future.
3. Use open source where it makes sense. Projects like Linux, OpenStack, Hadoop, Apache Mesos, Apache Spark, Riak and others provide real value that helps companies develop a digital platform for innovation. The catch is that open source talent is scarce and much of the tech is still immature. Companies like Cloudera can mitigate this, but they have their own form of lock-in to watch out for in the form of "enterprise distributions".
4. Don't standardize across the enterprise on proprietary platforms like MSFT Azure. Period. But don't be afraid to use proprietary platforms for specific high-impact projects if you have significant in-house talent aligned to that platform – while building expertise around alternatives like Cloud Foundry.
5. Make sure your vendors and developers use a standards-based, service-oriented approach. Existing standards like JSON, WSDL, Openstack APIs and emerging standards like WADL, VMAN, CIMI should be supported to the extent possible for any technology you choose to adopt.
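To make points 1, 2 and 5 a bit more concrete, here's a minimal sketch (the interface and class names are mine, purely for illustration) of the abstraction-layer idea: keep provider-specific calls inside small adapters so that changing clouds is an adapter swap, not an application rewrite.

```python
# Illustrative only: a thin, provider-neutral storage interface. Adapter
# internals are deliberately left as placeholders rather than real SDK calls.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """The interface the rest of the codebase depends on."""
    @abstractmethod
    def put(self, bucket: str, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, bucket: str, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in adapter so the sketch runs without any cloud SDK."""
    def __init__(self):
        self._objects = {}
    def put(self, bucket, key, data):
        self._objects[(bucket, key)] = data
    def get(self, bucket, key):
        return self._objects[(bucket, key)]

# A real AwsS3Store or GoogleCloudStore would wrap that vendor's SDK here,
# but only inside its own adapter; application code never imports the SDK.

def archive_report(store: ObjectStore, report: bytes) -> None:
    store.put("reports", "2017-q1.pdf", report)

if __name__ == "__main__":
    store = InMemoryStore()
    archive_report(store, b"quarterly numbers")
    print(store.get("reports", "2017-q1.pdf"))  # b'quarterly numbers'
```

It isn't free (you give up some provider-specific features), but it keeps the switching decision in your hands rather than your vendor's.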
Why cloud IT providers generate profits by locking you in, and what to do about it - Part 1 of 2
The comparison has been made before, but the Cloud IT sector today looks in many ways much like the automotive industry of the early 20th century.
During the first decade of the 20th century, more than 485 auto companies entered the market. After that explosive first decade, a massive wave of consolidation followed, eventually resulting in the dominance of just a few companies. This type of industry evolution (technological innovation at the outset, followed by a wave of commoditization and finally consolidation) is common in markets with economies of scale. There will be differences though.
Here are two predictions for differences in industry evolution between Cloud IT and 20th century autos:
- The Cloud IT sector will consolidate an order of magnitude faster (perhaps two) than the auto industry did
- Future Cloud IT oligopolists will work to maximize customer retention by maximizing switching costs (creating lock-in)
Cloud IT providers are subject to the same business rules as businesses in any other sector. One of those rules is that, if you want to make long term sustainable profits (or "economic" profits for you MBA types), you must acquire customers and retain them. Technology driven value add (sustainable competitive advantage) can help acquire customers, but as more and more IT becomes commoditized, customer switching costs start to drive the pricing required for cloud providers to sustain profits.
While auto manufacturers used economies of scale, styling and brand loyalty to generate economic profits, cloud IT providers have to deal with a steeper commoditization curve (in part driven by open source) and a lower (but still large) threshold for reaching economies of scale (the minimum point of the long-run average cost curve). What this means is that commoditization happens more quickly as providers reach scale more quickly. In this environment, a loss leader plus high switching cost strategy makes a great deal of sense.
If you think about it for a moment, the prior generation of IT leaders (i.e. HP, Microsoft, IBM, Cisco, Dell, EMC, Oracle, SAP) used a number of techniques to lock in their customers, so why not the next generation of IT leaders?
How cloud IT providers lock you in
One of the key needs of 21st century enterprises is to be able to service customers better in the digital realm. To do that they need to make sense of and monetize their data, all their data. That data lives in lots of different systems and has to be integrated to be useful. Cloud services typically provide APIs (application programming interfaces) to make this possible.
Lock-in method one: Enter the API
In the age of devops where everything in IT is being automated, cloud services are no exception. By using cloud service APIs, different data sets and applications can be connected in useful ways. An example use case is to use customer purchase information and social media activity to create a better support experience, manage inventory, and optimize pricing in real time.
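Here's a deliberately hypothetical sketch of what that kind of automation looks like in practice (the endpoints and field names are invented for illustration, not any real provider's API); notice how quickly provider-specific calls pile up in even a trivial script.

```python
# Illustrative only: devops-style automation against a made-up cloud API.
import requests

PROVIDER_API = "https://api.example-cloud.com/v2"  # hypothetical endpoint
API_KEY = "..."  # normally injected from a secrets store

def tag_idle_instances(threshold_pct: float = 5.0) -> list:
    """Find low-utilization instances and tag them for review."""
    headers = {"Authorization": f"Bearer {API_KEY}"}
    instances = requests.get(f"{PROVIDER_API}/instances", headers=headers).json()
    flagged = []
    for inst in instances:
        if inst.get("cpu_utilization_pct", 100.0) < threshold_pct:
            # Both the URL scheme and the tag format are provider-specific,
            # which is exactly what makes this code expensive to port.
            requests.post(
                f"{PROVIDER_API}/instances/{inst['id']}/tags",
                headers=headers,
                json={"key": "review", "value": "idle-candidate"},
            )
            flagged.append(inst["id"])
    return flagged
```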
But the more you automate, the more it costs to switch providers.
Cloud IT providers' APIs are generally not standardized, and often they're highly complex. For example, VMware had at least 5 distinct APIs for its vCenter application alone at one point. Lack of standards and complexity create additional rework, add to training and recruiting costs, and add significant time and risk to any transition to another provider. This is true for any cloud provider that has a product with an API.
In other words, complex, non-standard APIs drive profits for cloud providers.
Lock-in method two: Data Gravity
Data Gravity, a term coined by Dave McCrory (@mccrory), the CTO at Basho Technologies, describes the tendency of applications to migrate toward data as it grows. The idea is that as data grows, moving it to the applications that use it becomes more difficult for several reasons, among them network costs, difficulties in accurately sharding the data, and so on.
Data is growing faster than bandwidth.
There are two power law effects (similar to Moore's law) going on with respect to data. First, driven by trends like social media and the Internet of Things (IoT), enterprise data is growing at an astronomical rate. Second, network bandwidth is demonstrating robust growth as well, but it is still growing much more slowly than data. The implication?
As time goes on, it gets harder to move your data from one provider to another. Amazon was likely thinking about this when they launched AWS Import/Export Snowball at Re:Invent last fall.
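A quick back-of-the-envelope calculation (with purely illustrative numbers) shows why moving big data sets over the wire stops being practical:

```python
# Rough arithmetic: moving 1 PB over a dedicated 10 Gbps link at an
# optimistic 80% sustained utilization. All figures are illustrative.
PETABYTE_BITS = 1e15 * 8   # 1 PB expressed in bits
LINK_BPS = 10e9            # 10 Gbps link
UTILIZATION = 0.8          # sustained throughput assumption

seconds = PETABYTE_BITS / (LINK_BPS * UTILIZATION)
print(f"{seconds / 86400:.1f} days")  # ~11.6 days, before retries and egress fees
```

Nearly two weeks of saturating a dedicated link, and that's before you pay the provider's data egress fees; shipping disks starts to look sensible.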
Like the proverbial drug dealer, many providers offer small-scale services heavily discounted or free as part of the cost of customer acquisition. While cloud is easy to consume, as scale grows, cost grows not just in absolute terms but in percentage terms.
In my next post, I’ll delve into the best practices that help you avoid (or at least minimize) cloud lock-in.
If you found this post informative, please share it with your network and consider joining the discussion below.
For more insights, sign up for updates at www.contechadvisors.com or follow me on Twitter at @Tyler_J_J
The Virtualization and Cloud Efficiency Myth
At the beginning of the 20th century in the US, life was difficult for all but the upper class. While 80% of families in the US had a stay-at-home mother, the hours were grueling for both parents in the family. Technology-driven innovation promised to change all that by making the life of a homemaker much more efficient and easier.
One can argue though that life didn't become easier – while new technologies like refrigerators, vacuum cleaners, dishwashers and washing machines might make life better, people seem to be working just as hard if not harder than in ages past. Technology increased the capacity for work, but instead of increasing leisure time, that excess capacity just shifted to other tasks. Case in point: the share of single-earner families dropped from 80% in 1900 to 24% in 1999.
Is this a good thing? Who knows, but if our lives aren’t better, it certainly isn’t the technology’s fault.
What does this have to do with virtualization and cloud?
Way back in 1998 when VMware was founded, virtualization presented a similar promise of ease and efficiency. By allowing administrators to partition underutilized physical servers into 'virtual' machines, they could increase utilization and free up capital. Unfortunately, that hasn't happened for the most part. It's a poorly held secret that server utilization in enterprise datacenters is much lower than most people think, even as virtualization reaches saturation with about 75% of x86 servers now virtualized.
A conversation between Alex Benik of Battery Ventures and a Wall Street technologist:
Alex: Do you track server and CPU utilization?
Wall Street IT Guru: Yes
Alex: So it’s a metric you report on with other infrastructure KPIs?
Wall Street IT Guru: No way, we don’t put it in reports. If people knew how low it really is, we’d all get fired.
Cloud isn’t any better. A cloud services provider I recently worked with found that over 70% of virtual machines customers provisioned were just turned on and left on permanently, with utilization under 20%. Google employees published a book with similar data. Not very ‘cloudy’ is it?
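To put rough, purely hypothetical numbers on that anecdote (the fleet size and pricing below are assumptions, not data from that provider):

```python
# Illustrative cost model: 70% of VMs left on permanently at under 20%
# utilization, per the anecdote above. Fleet size and rate are assumed.
vm_count = 1000          # fleet size (assumption)
hourly_rate = 0.10       # $/hour per VM (assumption)
always_on_share = 0.70   # share of VMs left running permanently
utilization = 0.20       # upper bound on their utilization

annual_spend = vm_count * always_on_share * hourly_rate * 24 * 365
idle_spend = annual_spend * (1 - utilization)
print(f"${annual_spend:,.0f} on always-on VMs, roughly ${idle_spend:,.0f} of it idle")
```

Even with generous assumptions, the bulk of that spend is buying idle capacity.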
At this point, most cloud pundits would suggest a technological solution, like stacking containers or something…
Four reasons why virtualization and cloud don’t drive significantly better utilization
Jevons Paradox
Jevons paradox holds that as a technology increases the efficiency of using a resource, the rate of consumption of the resource accelerates. Virtualization and cloud make access to resources (i.e. servers, storage, etc.) easier. The key resource metric to think about here is from the user's perspective: i.e. the number of workloads (not CPU cycles or GB of storage, etc.). This leads to server sprawl, VM sprawl, storage array sprawl, etc. and hurts utilization because the increasing number of nodes makes environments more complex to manage.
Virtualization vendors and cloud service providers don’t want you to be efficient
Take a look at licensing agreements from virtualization vendors. Whether it's a persistent or utility license per physical processor, RAM consumed, per host, per VM, per GB, it doesn't matter – the less efficient you are, the more money they make. Sure, companies like VMware, Amazon and Microsoft provide capacity management and optimization tools and they may even make them part of standard bundles, but your account team has a negative incentive for you to use them. Is that why they didn't help with deploying the tool? And let's be honest, if better usability reduces revenue, how much investment do you think the vendors are putting into user experience? Cloud is no better – if you leave all your VMs on and leave multiple copies of your data sitting around unused, does Amazon make more or less money? That's why 3rd party software from vendors like Stratacloud and SolarWinds is important. Beware of capacity management solutions from the hardware, virtualization, and service providers; chances are they're bloatware unless there is a financial incentive.
IT organizations don’t reward higher utilization
Okay, maybe this has been acceptable in the past, but that's changing. In an era of flat or declining IT budgets and migration of IT spending authority to other lines of business, spending valuable resources and time on capacity optimization has been pushed way down the list of priorities. While meeting budget is an important KPI, utilization is typically not. Leadership has also become leery of ROI/TCO analysis, and rightly so, with IT project failure rates resulting in organizations losing an average of US$109 million for every US$1 billion spent on projects. It's not just about buying a tool to improve efficiency; application architectures and processes also need rework – all of this creates risk from an IT perspective.
Application architectures and processes need rework
As in the early 20th century example above, better technology-driven efficiency doesn't necessarily help people achieve their objectives. Without improvements in processes (and organizations), better technology can lead to unintended effects (i.e. virtualization sprawl). As organizations acquire new skills (building application architectures that take advantage of cloud services, microservices, and so on), this will change over time. But the pace of change will still be governed by organizational and process change, not technology change.
Software defined ‘X’
Many have heard about software defined networking (SDN), software defined datacenters (SDDC), network functions virtualization (NFV), and so on. At their core, these technologies are all about automation and ease of deployment. What we've found so far is that, for the reasons above, this greater efficiency in provisioning new environments is likely to increase entropy, not decrease it. Only by making the needed changes in an organization's structure and processes will that complexity become manageable. And this type of change will be much slower in coming than the technology itself.
Do you agree or disagree with any of the points I've made? Let's have that discussion in the comments below.
Why is Uber Investing in Data Centers instead of Running Everything in the Cloud?
You might think that Uber, Airbnb, Pinterest, and all the other unicorns of the cloud native age are all about cloud, and that the notion of building data centers is old school.
Not so fast
Avoiding assets by owning no cars is a key part of Uber’s strategy, so why would Uber invest in data centers? (Links here and here)
1. Data is a major source of competitive advantage (when used strategically).
Google understands that, Facebook understands that. What else do they have?
Data centers
Uber's digital platform collects an incredible amount of data: mapping information, our movements, preferences, and connections are just a few of the elements in Uber's data stores. This data is massive, unique to Uber, and, when combined creatively with other sources of data, becomes a competitive weapon. Uber, like many others, is actively investing in developing additional capabilities, many of them digital; data is the critical piece underlying that strategy.
2. Data Gravity is at work
Data Gravity, a term coined by Dave McCrory (@mccrory), the CTO at Basho Technologies, describes the tendency of applications to migrate toward data as it grows. The idea is that as data grows, moving it to the applications that use it becomes more difficult for several reasons, among them network costs, difficulties in accurately sharding the data, and so on. As time goes on, data sets get bigger and moving them becomes more difficult, leading to switching costs and lock-in.
Because data is such a strategic asset to enterprises with digital strategies, it makes sense that they would build their own infrastructure around it (not the other way around). New digital capabilities in Social, Mobile, Analytics, and Cloud will certainly utilize different service providers, but data will continue to be central.
Amazon was the first to figure this out, but the list of companies pursuing a data-first digital strategy is growing. Most recently, GE announced its Predix cloud service, a clear indication that it intends to launch many specialized, cloudy IoT (or Industrial Internet) centric offerings that augment its existing businesses and create customer value and intimacy across both the physical and virtual world. As a $140+ billion company that likely spends at least $5+ billion on IT, GE has little reason to rely on an outside service provider, given that its internal IT services are larger than those of any service provider except AWS.
GE won't be the only one. In the not too distant future, a big chunk of the Fortune 500 will have an internal, multi-tenant public cloud that delivers APIs, platforms, software and other cloudy services tightly aligned with their other offerings. Basically the definition of a Digital Enterprise.
Most people think Hybrid Cloud is about connecting private clouds to public clouds. I think private clouds will eventually be multi-tenant, meaning more and more companies like Uber and GE (yes I did use them both in the same sentence) will start to look just a little more like Amazon and Google.
Do you agree or disagree? Let's have that discussion in the comments below.
Is Cloud an "Enabler" or "Dis-abler" for Disaster Recovery?
Yesterday in Atlanta I had the honor of moderating a Technology Association of Georgia (TAG) panel discussion on operational resilience, and the topic of cloud computing came up. While most people working for cloud providers (I used to work at one) will tell you that Disaster Recovery is a great use case for cloud, our panelists weren't so sure. The feeling in the room was that utilizing a cloud environment in addition to traditional on-premises environments created a great deal of operational complexity and that it was safer to keep both production and DR in-house.
So which is it? Cloud providers are clearly making money selling DR services, but managing hybrid on-premises/cloud DR is difficult, and you may not get the results you expect.
Are IT managers signing for cloud based DR services just to check a box, not knowing if it'll work when it's needed? Perhaps....
The reality is that when done correctly, cloud based DR services can help companies protect their operations and mitigate risk - but it's not easy.
For anyone tasked with developing an IT disaster recovery plan as part of their company’s business continuity plan, the alphabet soup of DR options talked about today by service providers, software vendors, analysts and pundits is truly bewildering. Against this backdrop, analysts like Gartner are predicting dramatic growth in both the consumption and hype of “cloudwashed” DR services.
I agree.
With the lack of standardization, it’s increasingly complex to map DR business requirements to business processes, service requirements and technology. Given this, how do you make sense of it all? For one thing, it’s critically important to separate the information you need from the noise, and the best way to do that is:
Start with the basics of what you’re trying to do – protect your business by protecting critical IT operations, and utilize new technologies only where they make sense. Here are some things to think about as you consider DR in the context of modern, “cloudy” IT.
There is no such thing as DR to the cloud (even though cloud providers claim "DR to the Cloud" solutions).
There’s been a lot made lately about utilizing cloud technology to improve the cost effectiveness of Disaster Recovery solutions. Vendors, analysts, and others use terms like DRaaS, RaaS, DR-to-the-Cloud, etc. to describe various solutions. I’m talking about using cloud as a DR target for traditional environments, not Cloud to Cloud DR (that’s a whole other discussion).
There’s one simple question underlying all this though: If, when there is a disaster, these various protected workloads can run in the cloud, WHY AREN’T they there already?
Getting an application up and running on a cloud is probably more difficult in a DR situation than when there isn't a disaster. If security, governance, and compliance don't restrict those applications from running in a cloud during a DR event, they should be considered for running in the cloud today. There are lots of other reasons for not running things in the cloud, but it's something to consider.
You own your DR plan. Period.
Various software and services out there provide service level agreements for recovery time and recovery point objectives, but that doesn't mean that if you buy it, you have DR. For example, what exactly does the word recovery mean? Does it mean that a virtual machine is powered up, or that your customers can successfully access your customer support portal? The point is, except in the case of 100% outsourced IT, only your IT department can ensure that end-to-end customer (or employee, in the case of internal systems) processes will be protected in the case of disaster. There are lots of folks that can help with BIAs, BCDR planning, hosting, etc. that provide key parts of a DR solution, but at the end of the day, ultimate responsibility for DR lies with the IT department.
Everyone wants DR, but no one wants to pay for it.
I've had lots of conversations with and inquiries from customers asking for really aggressive DR service levels, and then when they hear how much it's going to cost, they back away from their initial requirements pretty quickly. The reality is that as objectives get more aggressive, the cost of DR infrastructure, software, and labor begins to approach the cost of production – and few businesses are able to support that kind of cost. Careful use of techniques like using test/dev environments for DR, global load balancing of active/active workloads, and less aggressive recovery time objectives can drive the cost of DR down to where it should be (about 25% of your production environments' cost), but be skeptical of any solutions that promise both low cost and minimum downtime.
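Here's the rough arithmetic behind that budgeting point (every figure below is an illustrative assumption, not a benchmark):

```python
# Illustrative DR budgeting math, using the ~25% rule of thumb from above.
production_cost = 2_000_000   # annual production infrastructure + ops ($), assumed
dr_target_ratio = 0.25        # rule-of-thumb share of production cost

print(f"Target DR budget: ${production_cost * dr_target_ratio:,.0f} per year")

# A near-mirrored, aggressive RTO/RPO design approaches production cost,
# which is why requirements tend to get relaxed once the quote arrives.
print(f"Near-mirrored estimate: ${production_cost * 0.9:,.0f} per year")
```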
Service provider’s SLA penalties never match the true cost of downtime.
Ok, let’s be honest - Unless you’re running an e-commerce site and you can measure the cost of downtime, you probably don’t know the true cost of downtime. Maybe you hired an expensive consultant and he or she told you the cost, but that’s based on an analysis with outputs highly sensitive to the inputs (and those inputs are highly subjective).
But that doesn’t mean that service provider SLA penalties don’t matter. Actually, strike that – service provider penalties don’t matter.
A month of services or some other limited penalty in the event of missing a DR SLA won’t compensate for the additional downtime. If it did, then why are you paying for that stringent SLA in the first place? The point here is that only a well thought out and tested DR strategy will protect your business. This leads me to my last point.
You don’t have DR if you don’t regularly test.
A DR solution is not "fire and forget". To ensure that your DR solution works, I recommend that you test at the user level at least quarterly. DR testing is also a significant part of the overall cost of DR and should be considered when building your business case. I'm sad to say, many of my customers do not test their DR solutions regularly (or at all). The reasons for this are many, but in my opinion, it's usually because the initiative was driven exclusively by IT technologists and the supporting business processes and metrics were never put in place. My advice if you implement a DR solution and don't test it: keep your resume up to date, you'll need it in the event of a "disaster".