How metaDNA™ is different from microservices, containers and APIs
There is quite a bit of buzz and confusion around microservices, containers, APIs, service meshes, and other emerging technologies related to "cloud-native" application development and data integration; unfortunately, PrivOps has been caught up in the confusion. I often get questions about our (now patented) technology, specifically metaDNA™, the core of our technology, where folks try to categorize us incorrectly.
To be clear, metaDNA™ is not an API manager (e.g. Mulesoft), a container orchestrator (e.g. OpenShift), a service mesh (e.g. Istio), an integration platform (e.g. IBM IIB), or a master data manager (e.g. Informatica); it is an entirely new category. Let me explain. (And yes, I understand that the vendors mentioned above have products that span multiple categories.)
To understand metaDNA™, first we need some context. For example, the concept of a microservice is an abstraction that is a manifestation of the interplay between modularity and atomicity (i.e. irreducibility) at the software architectural layer. There are many other abstractions at and between other layers of the technology stack, including the interface (e.g. APIs, UIs), the server (e.g. virtual machines, containers), the network (e.g. packets, protocols), the structural (e.g. object-oriented and functional constructs), the language (e.g. high-level software instructions that abstract assembly language instructions that abstract hardware operations), and so forth.
Two important questions are:
Is there currently a modularity gap that sits between microservices (software architecture), functional programming and data structures?
Would it matter if an abstraction filled that gap?
Is there a modularity gap that sits between microservices, functional programming and data structures? The answer is yes, and that gap is what my metaDNA™ ontology (and the metaDNA™ catalog that implements it) attempts to remedy. For those unfamiliar with the term, an ontology is simply a structured way of describing (or, in this case, building) a class of object structures and the relationships between those objects. (More on ontologies here.)

Because of its ontology, the metaDNA™ catalog serves as an abstraction layer that sits between (and unifies) microservices, functional programming and data structures, and constitutes an entirely new paradigm for building digital technology. metaDNA™ builds on other abstractions like microservices and containers, but doesn't necessarily replace them. Like biological DNA, metaDNA™ objects have 4 atomic types with uniform structures. In the same way biological DNA composes lifeforms, objects from the metaDNA™ catalog compose software components (microservices) AND data structures from EVERY type of structured data. This approach creates several potential advantages for hybrid cloud applications, including self-referential data and applications, data-defined software applications that reconfigure based on context, policy-driven application behavior changes, and others.
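The actual metaDNA™ catalog is proprietary, but the general idea described above, a small fixed set of uniform atomic types that compose both code-like and data-like structures, can be sketched generically. Everything below (the four type names, the `Atom` structure, the example objects) is my own illustrative assumption, not the real metaDNA™ design:

```python
# Purely illustrative sketch: a catalog built from a small, fixed set of
# uniform atomic types, loosely analogous to DNA's 4 bases. The type names
# and structure here are hypothetical, NOT the actual metaDNA(tm) design.
from dataclasses import dataclass, field
from enum import Enum

class AtomType(Enum):       # exactly 4 atomic types, per the DNA analogy
    ENTITY = "entity"
    ATTRIBUTE = "attribute"
    RELATION = "relation"
    RULE = "rule"

@dataclass
class Atom:
    kind: AtomType
    name: str
    # Uniform structure: every atom, whatever its kind, links to other atoms.
    links: list = field(default_factory=list)

# Composing a tiny "data structure" and a "behavior" from the same atoms:
customer = Atom(AtomType.ENTITY, "customer")
email = Atom(AtomType.ATTRIBUTE, "email")
validate = Atom(AtomType.RULE, "must_contain_at_sign")
customer.links.append(Atom(AtomType.RELATION, "has", [email, validate]))
```

The point of the sketch is only that one uniform object shape can describe an entity, its data, and a rule about that data, so application structure and data structure come from the same building blocks.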
Does it matter if an abstraction layer fills the gap between microservices (software architecture), functional programming and data structures? Absolutely, because without it, the complexity of microservices-based software architectures still grows rapidly with scale, even with the use of APIs, containers and service meshes. For example, the cost of point-to-point integration among legacy and modern systems grows quadratically at the rate of ½KN(N−1), where K is the cost of each integration and N is the number of systems being integrated. Adding tools adds a similar nonlinear cost growth. While the modularity afforded by various solutions at the API, microservice and other layers flattens the cost curve, without addressing the modularity gap between the application and data layers the curve is still nonlinear and still runs into scalability problems, especially for situations like Digital Transformation that require integration of large numbers of legacy systems and edge computing (even with 5G).
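The cost curve above is easy to see with a few numbers. Here is a minimal sketch of the ½KN(N−1) formula; the K value is a made-up placeholder, not a real integration cost:

```python
# Sketch of the point-to-point integration cost curve: 1/2 * K * N * (N - 1),
# where K is the cost per pairwise integration and N the number of systems.

def point_to_point_cost(k: float, n: int) -> float:
    """Total cost of fully meshing n systems with pairwise integrations."""
    return 0.5 * k * n * (n - 1)

K = 10_000  # hypothetical cost per integration, in dollars
for n in (5, 10, 50, 100):
    print(f"{n:>3} systems -> ${point_to_point_cost(K, n):,.0f}")
```

Doubling the number of systems roughly quadruples the cost, which is why point-to-point integration stops scaling long before the systems themselves do.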
- Tyler
Top 12 Quotes from Bezos' 2016 Letter to Shareholders
At over 4000 words, Jeff Bezos' 2016 letter to Amazon shareholders (posted last week) has a lot to say. While I highly recommend tech executives and investors read the entire thing, here are my top 12 excerpts from the letter:
1. Our growth has happened fast. Twenty years ago, I was driving boxes to the post office in my Chevy Blazer and dreaming of a forklift.
2. This year, Amazon became the fastest company ever to reach $100 billion in annual sales. Also this year, Amazon Web Services is reaching $10 billion in annual sales … doing so at a pace even faster than Amazon achieved that milestone.
3. AWS is bigger than Amazon.com was at 10 years old, growing at a faster rate, and – most noteworthy in my view – the pace of innovation continues to accelerate – we announced 722 significant new features and services in 2015, a 40% increase over 2014.
4. Prime Now … was launched only 111 days after it was dreamed up.
5. We also created the Amazon Lending program to help sellers grow. Since the program launched, we’ve provided aggregate funding of over $1.5 billion to micro, small and medium businesses across the U.S., U.K. and Japan
6. To globalize Marketplace and expand the opportunities available to sellers, we built selling tools that empowered entrepreneurs in 172 countries to reach customers in 189 countries last year. These cross-border sales are now nearly a quarter of all third-party units sold on Amazon.
7. We took two big swings and missed – with Auctions and zShops – before we launched Marketplace over 15 years ago. We learned from our failures and stayed stubborn on the vision, and today close to 50% of units sold on Amazon are sold by third-party sellers.
8. We reached 25% sustainable energy use across AWS last year, are on track to reach 40% this year, and are working on goals that will cover all of Amazon’s facilities around the world, including our fulfillment centers.
9. I’m talking about customer obsession rather than competitor obsession, eagerness to invent and pioneer, willingness to fail, the patience to think long-term, and the taking of professional pride in operational excellence. Through that lens, AWS and Amazon retail are very similar indeed.
10. One area where I think we are especially distinctive is failure. I believe we are the best place in the world to fail (we have plenty of practice!), and failure and invention are inseparable twins.
11. We want Prime to be such a good value, you’d be irresponsible not to be a member.
12. As organizations get larger, there seems to be a tendency to use the heavy-weight Type 1 decision-making process on most decisions, including many Type 2 decisions. The end result of this is slowness, unthoughtful risk aversion, failure to experiment sufficiently, and consequently diminished invention
Why cloud IT providers generate profits by locking you in, and what to do about it - Part 2 of 2
In the first article of this series, I spoke about how and why cloud IT providers use lock-in. I'll briefly revisit this and then focus on strategies to maintain buyer power by minimizing lock-in.
If you asked Kevin O'Leary of ABC's Shark Tank about customer lock-in with cloud providers, he might say something like:
"You make the most MONEY by minimizing the cost of customer acquisition and maximizing total revenue per customer. COCA combined with lock-in lets them milk you like a cow."
In short, they want to make as much profit as possible. So what do you do about it?
1. Avoid proprietary resource formats where possible. For example, both AWS and VMware use proprietary virtual machine formats. Deploying applications on generic container technologies like Docker and Google’s Kubernetes means you’ll have a much easier time moving the next time Google drops prices by 20%.
2. Watch out for proprietary integration platforms like Boomi, IBM Cast Iron, and so on. The more work you do integrating your data and applications, the more you’re locked into the integration platform. These are useful tools, but limit the scope of what each platform is used for, and have a plan for how you might migrate off that platform in the future.
3. Use open source where it makes sense. Projects like Linux, OpenStack, Hadoop, Apache Mesos, Apache Spark, Riak and others provide real value that helps companies develop a digital platform for innovation. The problem is that finding talent is a real challenge with open source, and much of the tech is still immature. Companies like Cloudera can mitigate this, but they have their own form of lock-in to watch out for in the form of “enterprise distributions”.
4. Don’t standardize across the enterprise on proprietary platforms like MSFT Azure. Period. But don’t be afraid to use proprietary platforms for specific high-impact projects if you have significant in-house talent aligned to that platform – while building expertise around alternatives like Cloud Foundry.
5. Make sure your vendors and developers use a standards-based, service-oriented approach. Existing standards like JSON, WSDL, OpenStack APIs and emerging standards like WADL, VMAN, CIMI should be supported to the extent possible for any technology you choose to adopt.
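As a small illustration of that last point, keeping exported state in a neutral, standard format like JSON (rather than a vendor's proprietary schema) keeps a migration path open. The field names below are made up for the example:

```python
import json

# Hypothetical application state kept in a vendor-neutral format (JSON)
# so any standards-aware platform can re-import it later.
workload = {
    "name": "billing-service",
    "runtime": "container",
    "endpoints": [{"protocol": "https", "port": 443}],
}

portable = json.dumps(workload, indent=2)  # plain JSON text, no proprietary encoding
restored = json.loads(portable)            # any other tool can do the same
```

The same round-trip test is a cheap due-diligence check for any platform you evaluate: if you can't export your configuration and data in a documented standard format, you're already locked in.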