I have been scratching my head recently over software vendors' policies regarding pricing, and multi-core pricing in particular. The game is completely foolish and will hopefully reach its end soon, as more and more vendors show up pricing per user to best serve the SaaS model of their customers, or pricing per instance (read: JVM instance, or software server instance generically speaking).
Let's give it some background first and take the example of a small software vendor selling a distributed in-memory cache: Tangosol, now owned by Oracle. Up until very recently, before the Oracle acquisition, Tangosol had a public price on its website, on a "per CPU" (in clear text) basis.
In August 2007 they shipped a minor version of their software (now under the Oracle umbrella) and announced a new price roughly twice as expensive. I dared to raise the issue and Cameron (Tangosol's former CEO, now at Oracle) kindly dared to answer it: Tangosol was actually priced per core and not per CPU, despite being advertised as per CPU on their web site... Due to Oracle policies regarding CPU and core pricing, they had to go through this price change.
Read it here.
(no offense to Tangosol, it's great stuff and I am glad they clarified this).
Conclusion 1: don't trust advertised per-CPU prices. Anyone advertising a per-CPU price without disclosing complete details of their CPU and core policies is lying to defer the price debate, and adds complexity to a TCO comparison if they are in a competitive situation.
Next step, let's recap vendor policies regarding CPUs and cores...
You already know that you cannot buy single-core x86 (Intel/AMD) chips anymore, that the norm is dual core, and that quad core per chip (read: per socket on the motherboard, with 1 socket = 1 CPU) will be the default within a 6-month timeframe (go check on dell.com, you can already buy quad-CPU quad-core servers).
(disclaimer: this data comes from searching public sources):
For Oracle, you can read that each core accounts for either 0.75 of a CPU (on a dual-core chip) or 0.5 of a CPU (on a quad-core chip). So a dual-core chip costs 1.5 CPU units and a quad-core chip costs 2 CPU units.
Read it here and here.
A quad-CPU quad-core server costs 8 CPU units at Oracle (16 cores x 0.5).
For BEA, you can read that each core accounts for 0.25 of a CPU starting at the 3rd core in a single CPU. So a dual-core chip costs 1 CPU unit and a quad-core chip costs 1.5 CPU units.
Read details here.
A quad-CPU quad-core server costs 6 CPU units at BEA (4 x 1.5).
For IBM, they switched to a proprietarily defined Processor Value Unit that is supposed to better reflect the various processor architectures. In fact this is quite similar to what Oracle does, if you look here and on the IBM site here.
A quad-CPU quad-core server ends up at the equivalent of 8 CPU units at IBM.
For RedHat / JBoss (this is a subscription-based business model, but still), you can read part of the story on the RedHat site here and look at the JBoss TCO calculator here.
It works through 4-CPU packs, but I could not find any information on what their "CPU" means... so let's exclude it for now...
All in all, you can get a 25% difference just from how the CPU unit is counted: 8 units at Oracle versus 6 at BEA for the same box. Oracle can tell you its per-CPU price is 25% cheaper than BEA's, and you will still hand over exactly the same amount of money in the end on quad-CPU quad-core hardware (8 units at 75% of the price equals 6 units at full price).
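To make the arithmetic explicit, here is a minimal sketch (in Java, since we are talking JVMs anyway) of the unit counting on a quad-CPU quad-core box. The per-core factors are the ones quoted above from the public documents; the price figures are purely hypothetical, just to show how the "25% cheaper" pitch works out:

    public class CpuUnits {
        public static void main(String[] args) {
            int sockets = 4, coresPerSocket = 4;
            int totalCores = sockets * coresPerSocket;                       // 16 cores

            double oracleUnits = totalCores * 0.5;                           // 8.0 units (0.5 per core on quad core)
            double beaUnits = sockets * (1.0 + (coresPerSocket - 2) * 0.25); // 6.0 units (1 CPU for the first 2 cores, 0.25 each beyond)

            double listPrice = 10000.0;                 // hypothetical per-unit list price
            double discountedPrice = listPrice * 0.75;  // the "25% cheaper per CPU" pitch

            System.out.println("Oracle: " + oracleUnits + " units -> " + (oracleUnits * discountedPrice));
            System.out.println("BEA:    " + beaUnits + " units -> " + (beaUnits * listPrice));
            // Both totals print 60000.0: exactly the same money for the same box.
        }
    }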
Fair enough, you say?
Let's consider a few points:
- Do the business project people and admins know how many cores in total they currently run on, or will need to run on?
- Do they instead know perfectly well how many server instances and apps they have to start, stop, operate and monitor?
- Should your software license price be directly impacted when an application's overall throughput increases or decreases because business project owners asked for this or that new feature to be added?
- Did you already over-provision your hardware platforms to hide this fact?
- Did you take the time to ask about the vendor's policies regarding virtualisation? What if you turn the 4-CPU quad-core box into 8 virtual systems of 4 (virtual) CPUs, "quad core" (they are virtualized now, right?), to maximize its utilization? (See the sketch just after this list.)
- Do you have a virtualization or resource brokering project (read: dynamic, intelligent, real-time, throw in on-demand adaptive capacity), and are you close to the day you'll click a button to move an app instance running on that old 2-CPU dual-core box to your new 4-CPU quad-core hardware when it hits peak load (you've heard about VMotion and XenMotion already, right?), with no downtime? Have you thought about how you'll be able to optimize, or even plan, your per-CPU license costs in such a scheme?
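Here is the same kind of back-of-the-envelope sketch for the virtualisation question in the list above (hypothetical numbers, and I am not asserting any particular vendor's partitioning rule - you really have to ask):

    public class VirtualCoreCount {
        public static void main(String[] args) {
            int sockets = 4, coresPerSocket = 4;           // the physical box
            int physicalCores = sockets * coresPerSocket;  // 16

            int vms = 8, vcpusPerVm = 4;                   // carved into 8 guests of 4 vCPUs each
            int virtualCores = vms * vcpusPerVm;           // 32

            System.out.println("Physical cores: " + physicalCores);
            System.out.println("vCPUs seen by the guests: " + virtualCores);
            // Depending on whether the vendor counts physical sockets/cores or the
            // vCPUs each guest OS sees, the license count can double for the exact
            // same hardware and workload.
        }
    }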
It's obvious that a per-instance model (per JVM instance, per server software instance) brings simple answers to all these questions and is perfectly aligned with what IT and project people are held accountable for when running real-world apps.
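As a minimal sketch of why the instance is the natural unit: on any host you can enumerate the running JVMs in a handful of lines with the JDK 6 Attach API (this assumes tools.jar on the classpath; your monitoring tooling most likely does the equivalent already):

    import com.sun.tools.attach.VirtualMachine;
    import com.sun.tools.attach.VirtualMachineDescriptor;

    public class JvmCount {
        public static void main(String[] args) {
            int count = 0;
            for (VirtualMachineDescriptor vmd : VirtualMachine.list()) {
                // id() is the local VM identifier (the pid), displayName() the main class or jar
                System.out.println(vmd.id() + "  " + vmd.displayName());
                count++;
            }
            System.out.println(count + " JVM instance(s) running on this host");
        }
    }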
The problem for switchers (BEA, IBM, Oracle and every per-CPU vendor) is that the switch will lead them into fairly complex discussions with their customers, and the best exit is likely to start selling (time-limited) enterprise-wide (project-limited) licenses that don't account for anything anymore (until you realize it - hey, it's a sell/buy game!). The pain the vendor puts on you to manage and account for all these licenses even becomes part of its value proposition.
... up until a billback system is put in place to enable a pay-for-what-you-used model fully aligned with your own business running costs and ROI targets.
Comments:
Alex - just to provide a little background, the "CPU" definition we used was analogous to what the JVM reported, so each core counted as a CPU.
Peace,
Cameron.
Yes, I can imagine. This has been a common situation. Back in the day most vendors were charging extra for dual core (BEA dropped it in 2005 - see http://news.zdnet.co.uk/hardware/0,1000000091,39214641,00.htm). That being said, we haven't talked about this: what would turning on hyperthreading in the BIOS/OS mean for what a JVM reports as the number of cores? ;-)
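(For the record, a one-line sketch of "what the JVM reports" - on most platforms it is the number of logical processors the OS exposes, so hyperthreading inflates it:)

    public class WhatTheJvmSees {
        public static void main(String[] args) {
            // Typically logical processors (cores x hardware threads), not sockets.
            System.out.println("Processors reported to the JVM: "
                    + Runtime.getRuntime().availableProcessors());
        }
    }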
Hopefully things are changing in the right direction.
Per employee? Did Sun stop doing that?
So with a per-JVM model, will you mind if I run my one JVM on a 256-core box with 60 gigs of memory?
Because it can happen:
http://www.azulsystems.com/
Not knocking your model, just asking. :)
Hi Hans
Azul is a compute appliance that works with a remote, normal OS-based "proxy" JVM attached to it (hence their term "network attached processing"). You can perfectly well run multiple "JVMs" on an Azul appliance, and there are tools to manage resource allocation. So generally speaking one Azul appliance does not equal one JVM - and Azul happens to be a border case for per-core-pricing software vendors as well.
The point of pricing per JVM is not to derive it from (or limit it to) per-core thinking. It is just natural, because you usually know how many instances you are running, on which IP/port, and what they are doing (up/down/good/bad/logs here and there, etc.).
Totally true that one Azul appliance doesn't mean one JVM. But running on an Azul appliance, one JVM can have a 300GB heap and access to hundreds of cores at the same time. In a properly threaded and scalable application, the only bound on the capability of a "single JVM" becomes the network. Note that this is a nontrivial bound, since all logging activity flows from the Azul appliance back to the proxy process on the server.
But a Java server vendor might be a bit disappointed to find that the single JVM license that they thought would be one of many, is now running on an Azul box and thus powering what is really a huge deployment.
So it's an edge case like you said, but an interesting one. I have no answer for this issue and I agree that per core and per CPU pricing results in all kinds of contradictions.