Posts Tagged IT

Best TCO—System z vs. x86 vs. Public Cloud

IBM recently analyzed various likely customer workload scenarios and found that the System z as an enterprise Linux server could consistently beat x86 machines and even public cloud providers like AWS in terms of TCO.  The analysis was reasonably evenhanded although, like automobile mileage ratings, your actual results may vary.

This blogger has long contended that the z Enterprise Linux Server acquired under the deeply discounted IBM System z Solution Edition program could beat comparable x86 systems not only in terms of TCO but even TCA. Algar, a Brazilian telecom, acquired its initial zEnterprise Linux server to consolidate a slew of x86 systems and lay a foundation for scalable growth. It reports cutting data center costs by 70%. Nationwide Insurance, no newcomer to mainframe computing, used the zEnterprise to consolidate Linux servers, achieving $46 million in savings.

The point: the latest IBM TCO analyses confirm what IBM has been saying. The TCO advantage, IBM found, switches to the z Enterprise Linux Server at around 200 virtual servers when compared to the public cloud, and at somewhat more VMs when compared to x86 machines. View the IBM z TCO presentation here.

IBM further advanced its cause in the TCO/TCA battle with the recent introduction of the IBM Enterprise Cloud System. This is a factory-built and integrated system—processor, memory, network, IFLs, virtualization management, cloud management, hypervisor, disk orchestration, Linux OS—priced (discounted) as a single solution. IBM promises to deliver it in 45 days and have it production ready within hours of hitting the organization’s loading dock. Of course, it comes with the scalability, availability, security, manageability, etc. long associated with the z, and IBM reports it can scale to 6000 VMs. Not sure how this compares in price to a Solution Edition Enterprise Linux Server.

The IBM TCO analysis compared the public cloud, x86 cloud, and the Enterprise Cloud System in terms of power and space, labor, software/middleware, and hardware costs when running 48 diverse (a range of low, medium, and high I/O) workloads. In general it found an advantage for the z Enterprise Cloud System of 34-73%.  The z cost considerably more in terms of hardware, but it more than made up for it in terms of software, labor, and power. Overall, the TCO analysis examined more than 30 cost variables, ranging from blade/IFL/memory/storage amounts to hypervisor/cloud management/middleware maintenance.

In terms of hardware, the z included the Enterprise Linux Server, storage, z/VM, and IBM Wave for z/VM. Software included WebSphere Application Server middleware, Cloud Management Suite for z, and Tivoli for z/VM. The x86 cloud included HP hardware with a hypervisor, WebSphere Application Server, SmartCloud Orchestrator, SmartCloud Monitoring, and Tivoli Storage Manager EE. Both analyses included labor to manage both hardware and VMs, power and space costs, and SUSE Linux.

The public cloud assumptions were a little different. Each workload was deployed as a separate instance. The pricing model was for reserved instances. Hardware costs were based on instances in the US East region with SUSE, EBS volumes, data in/out, enterprise support, and free and reserved tier discounts applied. Software costs included WebSphere Application Server ND (middleware) costs for the instances. A labor cost was included for managing the instances.

When IBM applied its analysis to 398 I/O-diverse workloads, the results were similar: 49-75% lower cost with the Cloud System on z. Again, z hardware was considerably more costly than either x86 or the public cloud, but z software and labor were far less than the others. In terms of 3-year TCO, the public cloud was the highest at $37 million, x86 came in at $18.3 million, and the Cloud on z cost $9.4 million. With 48 workloads, the z again came in with the lowest TCO at $1 million, compared to $1.6 million for x86 systems and $3.9 million for the public cloud.
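
For readers who want to sanity-check the headline percentages, here is a minimal back-of-envelope sketch in Python using only the 3-year TCO totals cited above for the 398-workload case. The detailed cost breakdowns behind those totals are IBM's and are not reproduced here; this only shows how the "49-75% lower" claim follows from the published figures.

```python
# Back-of-envelope check of the 3-year TCO figures cited above (398-workload case).

tco_3yr = {          # 3-year TCO in millions of dollars, as published
    "public cloud": 37.0,
    "x86 cloud":    18.3,
    "cloud on z":    9.4,
}

z = tco_3yr["cloud on z"]
for platform, cost in tco_3yr.items():
    if platform == "cloud on z":
        continue
    savings = 1 - z / cost
    print(f"z vs {platform}: {savings:.0%} lower TCO")
# -> z vs public cloud: 75% lower TCO
# -> z vs x86 cloud:    49% lower TCO
```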

IBM tried to keep the assumptions equivalent across the platforms. If you make different software or middleware choices, or run a different mix of high/mid/low I/O workloads, your results will differ, but the rankings probably won’t change all that much.

Also, there still is time to register for IBM Edge2014 in Las Vegas. This blogger will be there hanging around the bloggers lounge when not attending sessions. Please join me there.

Follow Alan Radding/BottomlineIT on Twitter: @mainframeblog


Change-proof Your Organization

Many organizations are being whiplashed by IT infrastructure change—costly, disruptive, never-ending changes that hinder IT and the business.  You know the drivers: demand for cloud computing, mobile, social, big data, real-time analytics, and collaboration. Add to those soaring transaction volumes, escalating amounts of data, 24x7x365 processing, new types of data, proliferating forms of storage, incessant compliance mandates, and more, all of which keep driving change. And there is no letup in sight.

IBM started to articulate this in a blog post, Infrastructure Matters. IBM was focusing on cloud and data, but the issues go even further. It is really about change-proofing, not just IT but the business itself.

All of these trends put great pressure on the organization, which forces IT to repeatedly tweak the infrastructure or otherwise revamp systems. This is costly and disruptive not just to IT but to the organization.

In short, you need to change-proof your IT infrastructure and your organization.  And you have to do it economically and in a way you can efficiently sustain over time. The trick is to leverage some of the very same  technology trends creating change to design an IT infrastructure that can smoothly accommodate changes both known and unknown. Many of these we have discussed in BottomlineIT previously:

  • Cloud computing
  • Virtualization
  • Software defined everything
  • Open standards
  • Open APIs
  • Hybrid computing
  • Embedded intelligence

These technologies will allow you to change your infrastructure at will, changing your systems in any variety of ways, often with just a few clicks or tweaks to code.  In the process, you can eliminate vendor lock-in and obsolete, rigid hardware and software that has distorted your IT budget, constrained your options, and increased your risks.

Let’s start by looking at just the first three listed above. All of these have been discussed in BottomlineIT before, and you can be sure they will come up again.

You probably are using aspects of cloud computing to one extent or another. There are numerous benefits to cloud computing but for the purposes of infrastructure change-proofing only three matter:  1) the ability to access IT resources on demand, 2) the ability to change and remove those resources as needed, and 3) flexible pricing models that eliminate the upfront capital investment in favor of paying for resources as you use them.
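
To make the third point concrete, here is a rough sketch of the pay-as-you-go arithmetic. All of the numbers below (server price, monthly rate, months of use) are hypothetical, chosen only to illustrate how usage patterns drive the comparison, not quotes from any vendor.

```python
# Minimal sketch of the flexible-pricing argument: compare an upfront capital
# purchase against paying for equivalent cloud capacity only when it is needed.

server_capex = 12000          # hypothetical purchase price of an on-prem server ($)
monthly_cloud_rate = 400      # hypothetical on-demand cost for equivalent capacity ($/month)

for months_used_per_year in (3, 6, 12):
    cloud_cost_3yr = monthly_cloud_rate * months_used_per_year * 3
    print(f"{months_used_per_year:>2} months/yr of use: "
          f"cloud ≈ ${cloud_cost_3yr:,} over 3 years vs ${server_capex:,} capex")
# The shorter or spikier the demand, the stronger the case for on-demand pricing.
```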

Yes, there are drawbacks to cloud computing. Security remains a concern although increasingly it is becoming just another manageable risk. Service delivery reliability remains a concern although this too is a manageable risk as organizations learn to work with multiple service providers and arrange for multiple links and access points to those providers.

Virtualization remains the foundational technology behind the cloud. Virtualization makes it possible to deploy multiple images of systems and applications quickly and easily as needed, often in response to widely varying levels of service demand.

Software defined everything also makes extensive use of virtualization. It inserts a virtualization layer between the applications and the underlying infrastructure hardware.  Through this layer the organization gains programmatic control of the software defined components. Most frequently we hear about software defined networks that you can control, manage, and reconfigure through software running on a console regardless of which networking equipment is in place.  Software defined storage gives you similar control over storage, again generally independent of the underlying storage array or device.
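
As a purely illustrative sketch of what that programmatic control looks like in practice, the snippet below reconfigures a software defined network segment through a controller's REST endpoint. The controller URL and JSON fields are invented for illustration and are not any specific vendor's API; real SDN controllers expose different, and richer, interfaces.

```python
# Illustrative only: "programmatic control" of a software defined network means
# one API call from a console or script, regardless of the switches underneath.

import requests

controller = "https://sdn-controller.example.com/api/v1"   # hypothetical endpoint

new_segment = {
    "name": "analytics-net",
    "vlan": 210,
    "qos_policy": "low-latency",
}

# A single HTTPS call asks the controller to reconfigure the network.
resp = requests.post(f"{controller}/segments", json=new_segment, timeout=10)
resp.raise_for_status()
print("segment created:", resp.json().get("id"))
```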

All these technologies exist today at different stages of maturity. Start planning how to use them to take control of IT infrastructure change. The world keeps changing and the IT infrastructures of many enterprises are groaning under the pressure. Change-proofing your IT infrastructure is your best chance of keeping up.


What Bitcoin Means for Micro-Transactions in IT

Bitcoin may be the world’s newest currency. If not, it is certainly the most unconventional. But, it is catching on. As Reuters wrote recently: “Venture capitalists show no sign of shying away from investing in startups related to Bitcoin.”

Think of Bitcoin as electronic money, or maybe virtual money since no government backs it or controls it. Yet, businesses already are doing business with bitcoins. According to Reuters, there are 11.7 million bitcoins in circulation, with a market capitalization of over $1.7 billion. The price (value) fluctuates, but so does the value of conventional currencies although bitcoin fluctuations may be less well understood.

Wikipedia describes Bitcoin as a cryptocurrency—a type of currency that relies on cryptography to create and manage the currency. Specifically, the creation and transfer of bitcoins is based on an open-source cryptographic protocol that is independent of any central authority.  Bitcoins can be transferred through a computer or smartphone without involving an intermediate financial institution. The concept was introduced in a 2008 paper as a peer-to-peer (P2P) electronic cash system.
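
A toy sketch can make the cryptocurrency idea concrete: each transaction record is linked to the one before it by a cryptographic hash, so the history cannot be quietly rewritten. This is a simplification for illustration only, not the actual Bitcoin protocol, which adds mining, proof of work, and a distributed P2P ledger.

```python
# Toy hash-chained ledger: not Bitcoin, just the core idea of chaining records
# with cryptographic hashes so earlier entries cannot be altered unnoticed.

import hashlib
import json

def add_transaction(chain, payer, payee, amount):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"payer": payer, "payee": payee, "amount": amount, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

ledger = []
add_transaction(ledger, "alice", "bob", 0.005)   # a half-cent micro-payment
add_transaction(ledger, "bob", "carol", 0.002)
print(ledger[-1]["hash"][:16], "links back to", ledger[-1]["prev"][:16])
```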

For IT, Bitcoin promises to change the way financial transactions, especially very small (micro) transactions, are conducted: fast and securely with little or no overhead.  Today, about the best you can do is PayPal, but with a slew of middlemen it is not very efficient when it comes to micro-transactions.

A product or service selling at a micro price today isn’t really feasible from either an IT perspective or a financial perspective. But, with Bitcoin it might be since it removes a lot of financial and technical overhead.

The same big-name investors that backed Facebook Inc., Twitter, and Groupon Inc., along with Founders Fund, which includes three founders of PayPal, are putting serious money into Bitcoin even though the currency exists solely in cyber form. Proponents see it as the future of money, and in some investing circles, according to Reuters, it has created a buzz reminiscent of the early Internet.

For IT, Bitcoin may be the currency you will need as the global digital economy ramps up big. The benefits of bitcoins, or something like them, may be tremendous.  For starters, Bitcoin appears to address the problem of micro-transaction payments, where the cost of processing a credit or debit card transaction greatly exceeds the value of the transaction.  If you can do a lot of micro-transactions at almost no cost, the payback adds up.  The value of, say, 10 million half-cent transactions adds up to real money.
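
The arithmetic is worth spelling out. Assuming a hypothetical flat card-processing fee of 25 cents per item (a rough illustrative figure, not any specific processor's rate), the half-cent example looks like this:

```python
# 10 million $0.005 transactions are real revenue only if per-transaction
# processing costs stay near zero; a per-item card fee swamps them entirely.

transactions = 10_000_000
price_each = 0.005            # half a cent
card_fee_each = 0.25          # hypothetical flat per-transaction card fee

revenue = transactions * price_each
card_processing_cost = transactions * card_fee_each

print(f"gross revenue:        ${revenue:,.0f}")                # $50,000
print(f"card processing cost: ${card_processing_cost:,.0f}")   # $2,500,000 -- a non-starter
```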

Then there is what Bitcoin itself says about the product.  For example, Bitcoin’s high cryptographic security allows it to process transactions in a very efficient and inexpensive way. You can make and receive payments using the Bitcoin network with almost no fees.

Furthermore, any business that accepts credit card or PayPal payments knows the problem of payments that are later reversed because the sender’s account was hacked or they fraudulently claimed non-delivery. The only way businesses can defend themselves against this kind of fraud is with complex risk analysis and increased prices to cover the losses. Bitcoin payments are irreversible and wallets can be kept highly secure, meaning that the cost of theft is no longer pushed onto the shoulders of the merchants.

Accepting credit cards online typically requires extensive security checks in order to meet PCI compliance requirements. Bitcoin security, however, makes this approach obsolete. Your payments are secured by the network and not at your expense. OK, maybe that is not completely reassuring, but it is as good as or better than what you have now.

Finally, there is what Bitcoin calls accounting transparency. Many organizations are required to produce accounting documents about their activity and to adopt good transparency practices. Bitcoin allows you to offer the highest level of transparency since you can provide the detailed information you use to verify your balances and transactions.

OK, it isn’t perfect, but when Europe was precariously balanced on the edge of insolvency and countries like Greece, Cyprus, Italy, and Spain were in grave financial danger interest in bitcoins apparently soared and their value rose dramatically. Bloomberg Businessweek reported that Spaniards apparently were active buyers of bitcoins during the crisis, viewing the currency as a safe hedge against their own government seizing bank accounts and savaging their own conventional currency.

Maybe the most important thing to say about Bitcoin is that it is the future as the digital economy ramps up to rival the conventional economy. As users all over the world turn to smartphones for online commerce, IT will need something like Bitcoin. Besides, you don’t want some all-powerful government dictating even more regulations and issuing compliance mandates. Several governments are skeptical, to say the least, about the idea of Bitcoin but none apparently have shut it down.  As a P2P technology, Bitcoin is governed by the people that ultimately use it, maybe that will even be your own organization, and not by Big Brother.


Lessons from IBM Eagle TCO Analyses

A company running an obsolete z890 mainframe with what amounted to 0.88 processors (332 MIPS) planned a migration to a distributed system consisting of 36 UNIX servers. The production workload consisted of applications, database, testing, development, security, and more.  Five years later, the company was running the same workload in the 36-server, multi-core (41x more cores than the z890) distributed environment, except that its 4-year TCO had gone from $4.9 million to $17.9 million, based on an IBM Eagle study.  The lesson, the Eagle team notes: cores drive platform costs in distributed systems.
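
Taking just the two published TCO totals, the distributed environment cost roughly 3.7 times as much over four years, and that is before considering that the 41x increase in cores is what drives per-core software licensing and maintenance in the distributed world. A trivial check of that multiple:

```python
# The Eagle point in a nutshell: the published 4-year TCO totals alone show the
# cost multiple. (Per-core breakdowns were not published, so only the totals are used.)

mainframe_tco_4yr = 4.9e6     # z890, ~0.88 processors (332 MIPS)
distributed_tco_4yr = 17.9e6  # 36 UNIX servers, 41x more cores

print(f"TCO multiple: {distributed_tco_4yr / mainframe_tco_4yr:.1f}x")   # ~3.7x
```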

Then there is the case of a 3500 MIPS mainframe shop that budgeted $10 million for a 1-year migration to a distributed environment. Eighteen months into the project, now six months behind schedule, the company had spent $25 million and only managed to offload 350 MIPS. In addition, it had to increase staff to cover the overrun, implement steps to replace mainframe automation, acquire additional distributed capacity beyond the initial prediction (to support only 10% of total MIPS offloaded), and extend the period of running the old and new systems in parallel, at even more cost, due to the schedule slip. Not surprisingly, the executive sponsor is gone.

If the goal of a migration to the distributed environment is cost savings, the IBM Eagle team has concluded after 3 years of doing such analyses, most migrations are a failure. Read the Eagle FAQ here.

The Eagle TCO team was formed in 2007 and since then reports completing over 300 user studies.  Often its studies are used to determine the best platform among IBM’s various choices for a given set of workloads, usually as part of a Fit for Purpose analysis. In other cases, the Eagle analysis is aimed at enabling a System z shop to avoid a migration to a distributed platform. The Eagle team, in fact, is platform agnostic until it completes its quantitative analysis, when the resulting numbers generally make the decisions clear.

Along the way, the Eagle team has learned a few lessons.  For example:  re-hosting projects tend to be larger than anticipated. The typical one-year projection will likely turn into a two- or three-year project.

The Eagle team also offers the following tips, which can help existing shops that aren’t necessarily looking to migrate but just want to minimize costs:

  • Update hardware and software; new systems generally are more cost-efficient. For example, one bank upgraded from z/OS 1.6 to 1.8 and reduced each LPAR’s MIPS by 5% (the monthly software cost savings paid for the upgrade almost immediately)
  • Schedule workloads to take advantage of sub-capacity software pricing for platforms that offer it, which may produce free workloads
  • Consolidate workloads on Linux, which invariably saves money, especially when consolidating many Linux virtual servers on a mainframe IFL; a rough sizing sketch follows this list. (A recent debate raged on LinkedIn over how many virtual instances can run on an IFL, with some suggesting a max of 20. The official IBM figure: you can consolidate up to 60 distributed cores or more on a single System z core; a single System z core = an IFL.)
  • Changing the database can impact capacity requirements and therefore costs, resulting in lower hardware and software costs
  • Consider the  IBM mainframe Solution Edition program, which is the best mainframe deal going, enabling you to acquire a new mainframe for workloads you’ve never run on a mainframe for a deeply discounted package price including hardware, software, middleware, and 3 years of maintenance.
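
On the consolidation tip above, here is a rough sizing sketch using the two cores-per-IFL figures mentioned: the conservative estimate from the LinkedIn debate and IBM's published figure. The 480-core starting point is hypothetical, and actual ratios depend heavily on utilization and workload mix.

```python
# Rough IFL count needed to consolidate a pool of distributed cores, under two
# different consolidation-ratio assumptions. Illustrative only.

import math

distributed_cores_to_consolidate = 480   # hypothetical server farm
cores_per_ifl_conservative = 20          # low-end figure from the LinkedIn debate
cores_per_ifl_ibm = 60                   # IBM's published consolidation figure

for label, ratio in [("conservative", cores_per_ifl_conservative),
                     ("IBM figure", cores_per_ifl_ibm)]:
    print(f"{label:>12}: {math.ceil(distributed_cores_to_consolidate / ratio)} IFLs")
# -> conservative: 24 IFLs;  IBM figure: 8 IFLs
```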

 BottomlineIT generally is skeptical of TCO analyses from vendors. To be useful the analysis needs to include full context, technical details (components, release levels, and prices), and specific quantified benchmark results.  In addition, there are soft costs that must be considered.  Eagle analyses generally do that.

In the end, the lowest acquisition cost or even the lowest TCO isn’t necessarily the best platform choice for a given situation or workload. Determining the right platform requires both quantifiable analysis and judgment.


Mainframe Workload Economics

IBM never claims that every workload is suitable for the zEnterprise. The company prefers to talk about platform issues in terms of fit-for-purpose or tuned-to-the-task. With the advent of hybrid computing, the low-cost z114, and now the expected low-cost version of the zEC12 later this year, however, you could make a case that any workload benefiting from the reliability, security, and efficiency of the zEnterprise mainframe is fair game.

John Shedletsky, VP, IBM Competitive Project Office, did not try to make that case. To the contrary, earlier this week he presented the business case for five workloads that are optimum economically and technically on the zEnterprise.  They are: transaction processing, critical data workloads, batch processing, co-located business analytics, and consolidation-on-one-platform. None of these should be a surprise; with the possible exception of analytics and platform consolidation, they represent traditional mainframe workloads.  BottomlineIT covered Shedletsky’s mainframe cost/workload analysis last year here.

This comes at a time when IBM has started making a lot of noise about new and different workloads on the zEnterprise. Doug Balog, head of IBM System z mainframe group, for example, was quoted widely in the press earlier this month talking about bringing mobile computing workloads to the z. Says Balog in Midsize Insider: “I see there’s a trend in the market we haven’t directly connected to z yet, and that’s this mobile-social platform.”

Actually, this isn’t even all that new either. BottomlineIT’s sister blog, DancingDinosaur, was writing about organizations using SOA to connect CICS apps running on the z to users with mobile devices a few years ago here.

What Shedletsky really demonstrated this week was the cost-efficiency of the zEC12.  In one example he compared a single workload, app production/dev/test, running on a 16x, 32-way HP Superdome and an 8x, 48-way Superdome against a 41-way zEC12. The zEC12 delivered the best price/performance by far: $111 million (5-year TCA) for the zEC12 vs. $176 million (5-year TCA) for the two Superdomes.

In another comparison, Linux on z workloads on the zEC12 were pitted against 3 Oracle database workloads (Oracle Enterprise Edition, Oracle RAC, 4 server nodes per cluster) supporting 18K transactions/sec running on 12 HP DL580 servers (192 cores). The HP system priced out at $13.2 million (3-year TCA), while a zEC12 running 3 Oracle RAC clusters (4 nodes per cluster, each as a Linux guest) on 27 IFLs priced out at $5.7 million (3-year TCA). The zEC12 came in at less than half the cost.

With analytics such a hot topic these days, Shedletsky also presented a comparison of the zEnterprise Analytics System 9700 (zEC12, DB2 v10, z/OS, 1 general processor, 1 zIIP) plus an IDAA against a current Teradata machine. The result: the Teradata system cost $330K per query per hour compared to $10K per query per hour for the z system. Workload time on the Teradata was 1,591 seconds, or 9.05 queries per hour, compared to 60.98 seconds and 236 queries per hour on the zEC12. Total cost was $2.9 million for the Teradata compared to $2.3 million for the zEC12.
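
The dollars-per-query-per-hour figures follow directly from the workload times and total costs, provided the benchmark workload contains four queries. That four-query assumption is mine, not stated in the presentation, but it is the value that reproduces the published queries-per-hour numbers.

```python
# How the $/query-per-hour figures follow from workload times and total costs.

QUERIES_PER_WORKLOAD = 4   # assumed; chosen because it matches the published rates

systems = {
    #  name         workload time (s)   total cost ($)
    "Teradata":     (1591.0,            2.9e6),
    "zEC12 + IDAA": (  60.98,           2.3e6),
}

for name, (seconds, cost) in systems.items():
    qph = QUERIES_PER_WORKLOAD * 3600 / seconds
    print(f"{name:>12}: {qph:6.1f} queries/hr, ~${cost / qph:,.0f} per query/hr")
# -> Teradata:     ~9.1 queries/hr, ~$320K per query/hr (the article rounds to $330K)
# -> zEC12 + IDAA: ~236 queries/hr, ~$10K per query/hr
```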

None of these are what you would consider new workloads, and Shedletsky has yet to apply his cost analysis to mobile or social business workloads. However, the results shouldn’t be much different. Mobile applications, particularly mobile banking and other mobile transaction-oriented applications, will play right into the zEC12 strengths, especially when they are accessing CICS on the back end.

While transaction processing, critical data workloads, batch processing, co-located business analytics, and consolidation-on-one-platform remain the sweet spot for the zEC12, Balog can continue to make his case for mobile and social business on the z. Maybe in the next set of Shedletsky comparative analyses we’ll see some of those workloads come up.

For social business the use cases aren’t quite clear yet. One use case that is emerging, however, is social business big data analytics. Now you can apply the zEC12 to the analytics processing part at least and the efficiencies should be similar.


Five Questions That Set the Stage for Cloud Deployment

Cloud computing clearly is gaining traction, but it is leaving CIOs a little confused about the role of IT in any cloud strategy. Yet a September study conducted by the Open Data Center Alliance (ODCA) found that organizations are embracing the cloud at a 15% faster rate than previously forecast.

Of course, you expect members of a leading technology group like ODCA to be early adopters. But what if your client isn’t an early adopter? Deloitte offers a cloud readiness survey to identify inhibitors to cloud adoption here. Capgemini also has a cloud readiness approach here.

Even after management understands what it wants to achieve with cloud computing, the organization still may not be ready. Whatever the organization’s interest in cloud computing, IDC identifies five questions that set the stage for any cloud deployment.

The questions below are intended to help you guide your organization toward what is required to effectively leverage cloud computing.  Notice how its relationship with IT plays a central role, as it should. BottomlineIT expects cloud options to become just another part of the standard IT capabilities set, to be used to varying extents by almost every organization.

1.   To What Extent Are IT Resources Used to Support Company Objectives? This gets to the issue of whether IT is an integral part of the company’s strategic thinking. IDC found that almost 60% of midsize firms agree strongly that advanced technology is an important competitive tool when used as a strategic resource. If IT is integral then cloud computing can become a competitive differentiator. The cloud also represents an attractive option for companies that view technology as a way to save money.  Although the immediate benefit of cloud technology will be tactical cost and deployment advantages, longer term the strategic implications of cloud capabilities will be even more valuable.

2.   How Physically Complicated Is the Company? The number of company locations supported by the current IT infrastructure will be important to consider in any cloud computing implementation. On average, midsize firms have 6.4 locations, with IT staff typically based at headquarters. This can complicate general maintenance and installation of new software and upgrades. With cloud-based software all users run the latest version of hosted applications, simplifying support for multiple locations. In effect, the more locations you have and the more diverse your IT environment, the more the cloud can do for you in coordinating and managing application deployment.

3.  What Is the Company’s Pace of Organizational Evolution? How Much Change Is Under Way? The cloud can provide access across the organization to a central set of rationalized technology offerings. While these are easier to manage than multiple legacy approaches, the real benefits come from improvements in worker cooperation and collaboration. And don’t overlook future M&A activities. The integration of IT resources is not among the top concerns in an acquisition until a deal is completed. But then its impact can emerge in very unpleasant ways. For firms undergoing major change, cloud engagements today can set the stage for improved organizational flexibility tomorrow.  Cloud technology, similarly, can facilitate change, allowing midsize companies to add or test new applications or processes without having to expand their IT infrastructure. It also can make enterprise applications available to midmarket companies in an affordable way.

4.   How Are Mobile Workers Supported, and Could They Benefit from Access to Cloud-Based Resources? Enhancing worker productivity is a key reason for expanding technology investment and providing access to advanced networking capabilities via the cloud to achieve anytime, anyplace resource access. From an ROI perspective, the mobile worker case for cloud computing can be intuitively compelling, especially if it improves the sales close rate or speeds on-boarding new customers. IDC suggests that even a 5% improvement could translate into an effective financial justification for cloud computing investment.
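
Here is the rough shape of that justification, with every number hypothetical except the 5% improvement IDC cites; it is only a sketch of how such an ROI case would be framed.

```python
# Sketch of the mobile-worker ROI argument: even a small productivity gain can
# cover a cloud subscription. All inputs except the 5% figure are hypothetical.

mobile_workers = 50
revenue_per_worker = 400_000      # hypothetical annual revenue influenced per worker ($)
productivity_gain = 0.05          # the 5% improvement IDC cites
cloud_cost_per_worker = 600       # hypothetical annual per-seat cloud cost ($)

added_revenue = mobile_workers * revenue_per_worker * productivity_gain
cloud_cost = mobile_workers * cloud_cost_per_worker
print(f"added revenue ≈ ${added_revenue:,.0f} vs cloud cost ≈ ${cloud_cost:,.0f}")
```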

5.   What External Forces Are Encouraging/Discouraging Cloud Computing Adoption?  A changing competitive landscape as well as the regulatory environment can provide strong incentives or disincentives for the adoption of cloud computing. Note that external forces will continue to be in a state of flux as cloud computing becomes more widespread. If external forces are discouraging cloud adoption, plan to revisit those attitudes regularly because the increasing adoption and evolution of cloud computing changes attitudes fast.

There also are 3 reasons NOT to adopt cloud computing now: security/compliance, latency, availability. A subsequent post will elaborate on these.


Winning the Coming Talent War Mainframe Style

The next frontier in the ongoing talent war, according to McKinsey, will be deep analytics, a critical weapon required to probe big data in the competition underpinning new waves of productivity, growth, and innovation. Are you ready to compete and win in this technical talent war?

Similarly, Information Week contends that data expertise is called for to take advantage of data mining, text mining, forecasting, and machine learning techniques. As it turns out, the mainframe is ideally positioned to win if you can attract the right talent.

Finding, hiring, and keeping good talent within the technology realm is the number one concern cited by 41% of senior executives, hiring managers, and team leaders responding to the latest Harris Allied Tech Hiring and Retention Survey. Retention of existing talent was the next biggest concern, cited by 19.1%.

This past fall, CA published the results of its latest mainframe survey that came to similar conclusions. It found three major trends on the current and future role of the mainframe:

  1. The mainframe is playing an increasingly strategic role in managing the evolving needs of the enterprise
  2. The mainframe as an enabler of innovation as big data and cloud computing transform the face of enterprise IT
  3. Demand for tech talent with cross-disciplinary skills to fill critical mainframe workforce needs in this new view of enterprise IT

Among the respondents to the CA survey, 76% of global respondents believe their organizations will face a shortage of mainframe skills in the future, yet almost all respondents, 98%, felt their organizations were moderately or highly prepared to ensure the continuity of their mainframe workforce. In contrast, only 8% indicated having great difficulty finding qualified mainframe talent while 61% reported having some difficulty in doing so.

The Harris survey was conducted in September and October 2012. Its message is clear: Don’t be fooled by the national unemployment figures, currently hovering above 8%.  “In the technology space in particular, concerns over the ability to attract game-changing talent has become institutional and are keeping all levels of management awake at night,” notes Harris Allied Managing Director Kathy Harris.

The reason, as suggested in recent IBM studies, is that success with critical new technologies around big data, analytics, cloud computing, social business, virtualization, and mobile increasingly are giving top performing organizations their competitive advantage. The lingering recession, however, has taken its toll; unless your data center has been charged to proactively keep up, it probably is saddled with 5-year old skills at best; 10-year old skills more likely.

The Harris study picked up on this. When asking respondents the primary reason they thought people left their organization, 20% said people left for more exciting job opportunities or the chance to get their hands on some hot new technology.

Some companies recognize the problem and belatedly are trying to get back into the tech talent race. As Harris found when asking what companies are doing to attract this kind of top talent, 38% said they now were offering great opportunities for career growth. Others, 28%, were offering opportunities for professional development to recruit top tech pros. Fewer, 24.5%, were offering competitive compensation packages, while fewer still, 9%, were offering competitive benefits packages.

To retain the top tech talent they already had, 33.6% were offering opportunities for professional development, the single most important strategy they leveraged to retain employees. Others, 24.5%, offered opportunities for career advancement, while 23.6% offered competitive salaries. Still a few hoped a telecommuting option or competitive bonuses would do the trick.

Clearly mainframe shops, like IT in general, are facing a transition as Linux, Java, SOA, cloud computing, analytics, big data, mobile, and social play increasing roles in the organization and the mainframe gains the capabilities to play in all these arenas. Advanced mainframe skills like CICS are great, but they are just a start. You also need REST, Hadoop, and a slew of mobile, cloud, and data management skill sets.  At the same time, hybrid systems and expert integrated systems like IBM PureSystems and zEnterprise/zBX give shops the ability to tap a broader array of tech talent while baking in much of the expertise required.
