Welcome to EMC Consulting Blogs

Jas Dhalliwal's Blog

  • Time to Consider Scale Up in Virtualized Environments?

    Imported from http://omegacloud.typepad.com/ published Dec 2 2011

    The recent announcements from AMD, with their 16-core Opteron 6200 CPU, and from Intel, with their 10-core Xeon E7, indicate a resurgence of the scale-up mentality. Indeed, the virtualization bandwagon is partially responsible for fueling this rise.

    While on the one hand we have every virtualization vendor touting server consolidation and datacenter efficiency using simple scale-out models based on x86 technology, it is also apparent that to get full efficiency, the density of virtual machines to physical hosts (VM:hypervisor host) needs to increase.

    Factor in licensing and transformation costs, and it becomes increasingly difficult to create business cases that really make sense for an organization to invest in new hardware to get the best out of virtualization and ultimately the cloud - unless that very high density can be achieved.......

    Read entire blog at Time to Consider Scale Up in Virtualized Environments? 

  • Why Cores STILL Matter - Continuous Workload Consolidation

    Imported from http://omegacloud.typepad.com/ published Nov 16 2011

    In an earlier post related to targeted CPU power I talked about the emerging power of the x86 platform and its ability to handle even the most demanding workloads in a fully virtualized environment.

    Recently, AMD announced that its long-awaited 16-core (real cores - no hyper-threading) chip, codenamed "Interlagos", is finally available as the Opteron 6200. This is significant for the industry, not just in the speeds and feeds area, but also in really stepping up to the table in dealing with enterprise virtualization and Cloud needs in particular.

    The ability to continuously add computing power within ever tighter thermal envelopes using the same processor socket, and to wrap that in enough intelligence to deal with service provider issues such as energy management, is critical. This certainly follows the idea of upgrading the underlying hardware for a doubling in capability.

    The fact that the major virtualization players already support the AMD Opteron 6200 out of the box is great. This allows the forward-planning exercise to be done when choosing the computing base for your cloud.

    For new entrants, those that held out on full-blown cloud infrastructures, this very capable compute platform provides a means of entering rapidly and with power. The increasing core count provides a market mechanism for reducing barriers to entry - hence potentially more competitors and potentially more choice!......

    Read entire blog at Why Cores STILL Matter - Continuous Workload Consolidation

  • A Resurgent SPARC platform for Enterprise Cloud Workloads

    Imported from http://omegacloud.typepad.com/ published Nov 8 2011

    Fujitsu has just announced that they have taken the crown in Supercomputer performance, breaking past the 10-petaflop barrier. That is over 10 quadrillion floating-point operations per second. Seriously fast.

    Just when we thought that Intel/AMD and x86 would take over the world ;-), this beauty came along. For those interested in the speeds and feeds of the Kei Supercomputer - 22,032 four-socket blade servers in 864 server racks with a total of 705,024 cores!

    This is a Supercomputer with specific workload profiles running on it. However, looking at the scale of the infrastructure involved, this single construct is basically the equivalent of multiple large-scale Internet Cloud providers.

    Traditional Cloud providers may well find themselves with a new competitor: the HPC Supercomputer crowd. Supercomputers are expensive to run, but they have all the connectivity and datacenter facilities that one needs......

    Read entire blog at A Resurgent SPARC platform for Enterprise Cloud Workloads

  • Cloud Security Maneuvers - Governments taking Proactive Role

    Imported from http://omegacloud.typepad.com/ published Nov 4 2011

    In a previous blog entitled VMworld 2011 - Practice Makes Perfect (Security), I discussed the notion of preparing actively for attack in cyberspace through readiness measures and mock maneuvers.

    This is happening at the level of nations. ENISA, in Cyber Atlantic 2011, shows how large groups/blocs of nations are working not only on increasing their capabilities, but on practicing in concert to see how global threats can be prevented or isolated in cyberspace.

    This is at least as intensive as a NATO exercise: languages, cultures, varying capabilities, synchronization of Command & Control capabilities, as well as reporting and management at national levels.

    APTs (Advanced Persistent Threats) will be the target in this exercise......

    Read entire blog at Cloud Security Maneuvers - Governments taking Proactive Role

  • Size Matters - Micro Clouds and Engineered Systems

    Imported from http://omegacloud.typepad.com/ published Nov 1 2011

    In a number of blogs in the past I have charted the emerging growth in capabilities of infrastructure components that most people take for granted. The reason for doing this is to continue to highlight that old design rules may well need to give way to the new.

    Although Intel and AMD continue to release roadmaps for processors with baked-in virtualization in silicon, the entire market is moving towards scale out models to populate their Cloud infrastructures. The customers are voting with their wallets, and proprietary systems are gradually being pushed out.

    Looking at the Dell site the other day, I saw the new Dell PowerEdge R815 equipped with AMD processors. It sports 48 cores within a 2U footprint. This is really incredible - and 8 more cores than Intel currently ;-). Dell goes on to state that they have whitepapers comparing it against multiple 2U units from HP (DL 380 G7), claiming more capability at lower operating costs.

    These types of messages are sweeping the industry currently. However, this would indicate that scale-up as a strategy is on the rise again - after all, it is easier to manage a single physical server than two servers, right?....

    Read entire blog at Size Matters - Micro Clouds and Engineered Systems

  • VMworld 2011 - Practice Makes Perfect (Security)

    Imported from http://omegacloud.typepad.com/ published Oct 31 2011

    During the VMworld 2011 conference, the theme of security came up very strongly. Indeed, there were many parallels to the RSA Conference 2011 in Feb/2011 that echoed concerns about "putting all your eggs in one basket".

    Many solutions were presented, including new innovations from VMware in the form of the vShield family and vertical integration with the RSA enVision tools. Tools are good, but there are few substitutes for common sense and training.

    Within all the sessions, I did not really see anything indicating how in-depth Cloud security was to be achieved. Security certifications are mainly focused on awareness of issues pertaining to this theme and some level of descriptive and prescriptive actioning that can be performed within a framework.

    Taking a metaphor linked to security, namely defending a country, there are parallels that can be drawn. Typically there is an army of some sort providing the capabilities of the security force (SecOps - Security Operations) and a command and control center for operations (SOC - Security Operations Center).

    The army receives training both general and specific for particular engagement types (Security awareness training, Security tool training, System administration tasks such as patching, general awareness of threat levels around the world in cybersecurity terms). The army stays fit and in shape to respond should they be called into action. The army is distributed to ensure response in the correct measure and correct location (layered security distributed throughout a Cloud environment)........

    Read entire blog at VMworld 2011 - Practice Makes Perfect (Security) 

     
  • VMworld 2011 - 101 for Newbies

    Imported from http://omegacloud.typepad.com/ published Sep 30 2011

    The VMworld 2011 conference in Las Vegas in September was a really cool event. Lots of new stuff to see, lots of customers to talk to and of course lots of presentations!

    There are plenty of blogs out there that covered the conference and particular technologies and sessions. However, I found the VMworld Newbies material somewhat sparse.

    This being my first VMworld attendance, there were a couple of points that I wanted to spell out for future VMworld Newbies:

    1. It can be 40°C in Vegas – take plenty of water with you at all times!
    2. Having taken on lots of water, it eventually needs to leave the body – the call of nature. Apply some basic business intelligence to locate a toilet:
      1. Know where all the toilets in the conference center are
      2. Look at the session schedule and know which floors have sparsely attended sessions
      3. Run like hell to that sparsely populated floor – relief!
    3. There are packets of nuts/crisps/fruit on the floors at regular intervals – grab some as reserves that can be consumed in the session presentations
      1. You will need this – the brain is working overtime after all to suck all that information in
    4. In the session rooms:
      1. Find a seat – conveniently where you can read from the screens as well as being able to leave inconspicuously (the aisle is best)
      2. The rooms are cold, you are not moving too much -> be prepared for the call of nature
    5. Session selection - this is the big one!
      1. VMware logic says select a session at any particular time and you will be so happy that you will stay in the room for the full duration of the session – I don’t think so!
      2. My plan
        1. Select primary session of interest if still room available (most popular sessions were rerun in any case)
        2. Select second and third sessions per session window paying particular attention to the locations (which room and which floor)
        3. Turn up to the primary session -> if rubbish/boring/irrelevant/etc then out you go to the second selected session
        4. If the primary was too full, go to the second session (stay if it is a good session), otherwise out you go and back to the primary session -> there is always room to attend
      3. The third session is if you are following a strategy of sampling the material in each session
        1. The first 10 minutes of a session are used for proclaiming the disclaimer that is mandatory to show on each slide deck
        2. The last 10 minutes of a session are typically useless (although in some sessions there was good question-answer at end – but rare)
        3. That just leaves the middle bit of possible interest
        4. The third session basically lets you attend 3 different sessions to get a feel for and orientation on each
        5. Result -> attend 3 sessions for the price of one!


    Regarding the sessions: it is really not easy delivering these - my hat goes off to every single one of the presenters! However, given the varying levels of focus and expertise in the subject material, it is simply necessary to dive out (or in) based on your own personal evaluation of the material.

    Knowing the floor layout and where the common session rooms (for your selection) are really saves some time. There are other tips, but these are the ones I used every single day to my personal benefit. An evening review of the material helped take it all in.

    Remember, it is a fun, action-packed, technology-infused event over many days, and it pays to stay fit enough to see it through. I saw many attendees fading at the end – although that may well have been because of the event parties ;-)

     

    So here comes VMworld 2012 already knocking on the door!....

    Disclaimer

    The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.
  • Moving Out to OmegaCloud

    Hi Folks,

    Wanting to focus a little more on blogging, and in particular on all things cloud related (and wanting to stay relatively neutral on the subject ;-)), I have decided to move my blogs out to my new blog called Omega Cloud. I will continue to also post here in parallel, but there may be a small time lag.

    Thanks again to all those who continue to read my blog - when I get around to posting!

    See you on OmegaCloud!

  • VMware VCAP-DCA: Manage for Performance Course

    A couple of weeks back, I attended the VMware vSphere: Manage for Performance course. This is a recommended course for VCAP-DCA certification – the full title is (deep breath) – ‘VMware Certified Advanced Professional - Datacenter Certified Administrator’.

    I just wanted to give a heads-up to all those others that have spent the last year focusing on and sharing experience in relation to the VCAP certification process to show their great prowess in all things VMware ;-)

    I was somewhat skeptical of a performance-oriented course that lasted only 3 days. Typically in these new courses there seems to be a lot of "discussion". However, for that really to succeed, the course attendees need to have relatively intensive, medium-to-large-scale virtualization experience. It is critical that the trainer is someone who is engaged and motivated enough to perform the required level-setting in the group.

    Our trainer really managed to do that – no easy task, believe me! The guys at QA-IQ know their stuff and bring it across in a really interactive style (a big thanks to the trainer Nel Reinhart, who also shared great insight into rugby – and finally made me watch the Clint Eastwood/Morgan Freeman/Matt Damon film Invictus – excellent by the way!).

    Well, on to the course itself. There was actually a fair amount of hands-on for a change. The VCAP-DCD course was just a tad heavy on case study and discussion material. These courses are expensive, and I believe that every attendee wants to get the most out of the material and be able to use it directly in their daily work environment.

    What I personally liked about this VCAP-DCA Performance course was that, for one of the first times, the terminology has been standardized – particularly in the areas of memory, virtual memory and all things paging related. This has been a major headache when speaking with administrators at client sites, or even with VMware personnel for that matter: there is often a very large discrepancy between the words used, what was intended and the actual definitions of those words.

    There were also some clearer guidelines regarding the word "overhead" when creating virtual machines, and best practices that were actually pretty good.

    One of the areas we had an intensive discussion about in the course was technology convergence vectors and their effect on "best practices". As I principally work in high-end, large-scale environments, I get to see a lot of the cutting edge without getting too bogged down in the "techno-babble" that virtualization discussions sometimes lead to. You know what I mean – are 2x vCPU better than 1x vCPU, more memory or less, which guest operating system to use… :-(

    In the course we discussed some of the areas that are driving deep virtualization and continuous consolidation. We discussed the number of VMs that an ESX server can host, and the best practices for that. However, on the technology side we are seeing massive increases in, say, network bandwidth – 10GbE in the mainstream, and 40/100GbE technologies already out there at the bleeding edge. The old notion that there is not enough network bandwidth is starting to disappear – leading to revisions of the consolidation ratios practically achieved by customers!
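    As a rough, purely illustrative sketch of that last point (my own back-of-the-envelope model, not anything from the course material), the Python snippet below estimates where the NIC, rather than CPU or memory, would cap the VM-to-host consolidation ratio. The per-VM traffic figure and the headroom factor are hypothetical assumptions.

        def network_bound_vm_ceiling(nic_gbps, per_vm_mbps, headroom=0.7):
            """VMs per host before the NIC (rather than CPU/RAM) becomes the bottleneck."""
            usable_mbps = nic_gbps * 1000 * headroom  # keep ~30% headroom for bursts
            return int(usable_mbps // per_vm_mbps)

        # Assume an average VM drives ~50 Mbps of steady traffic (hypothetical figure).
        for nic in (1, 10, 40, 100):
            ceiling = network_bound_vm_ceiling(nic, 50)
            print(f"{nic:>3} GbE -> network-bound ceiling of ~{ceiling} VMs per host")

    The point is simply that every jump in NIC speed pushes the network-imposed ceiling up by an order of magnitude, which is exactly why the old consolidation-ratio rules of thumb keep getting revised.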

    Naturally, in the actual exam and training course material this needs to be matched with the exam requirements – you are there to pass the exams, after all. However, I think that training courses are sometimes a more intimate circle where new technology, best practices in the market, challenges, issues, and things that plain "just don't work" can be discussed in confidence without revealing any customer names (very important) or secret competitive "things".

    That type of discussion may well be the real value of such courses – requiring techies to be receptive to that type of information - as increasing competitive pressures are resulting in fewer customer references, less information sharing within an industry, and competitive "virtualization" advantage being fiercely guarded!

    From an EMC Consulting point of view, it is great to be able to help educate administrators on the rationale for virtualization beyond simple consolidation. The aspirations of their businesses are discussed, as well as where they see areas of improvement, from a practical administrative point of view, for the virtualization technologies.

    Correspondingly, the career administrators in such courses do a great job of educating consultants: they highlight how and why things go wrong on the ground, as well as the relationship management improvements needed to ensure the precious knowledge these administrators hold is distilled rather than their ideas simply being dismissed. Additionally, it is always great to hear how they are each addressing bulk administrative duties and automation in the context of their own unique datacenter eco-systems.

    In any case, to all those VCAP-DCA’ers, keep at it and good luck when you do your exam! Right – back to the training materials……

  • Exchange Server 2010 – Keep In-house or BPOS?

    Exchange Server 2010 (E2K10) continues to be a baffling phenomenon. We have a raft of features to make E2K10 easier to manage when running at large scale. Integration with the Microsoft ecosystem (ADS, SharePoint, Outlook/Office) is of course excellent. We even hear that, with the use of the new DAGs (Database Availability Groups – replicated mailbox databases), backups are a thing of the past – apparently there is no need to back up E2K10 at all.

    I have worked with all versions of Exchange, including when it was still MS Mail 3.x. Over the years I have been a big fan of this messaging system – mainly due to its integration ability and its use of database technology (well, JET actually) to provide many features that were novel at the time of introduction. E2K10 is no exception. However, with the rise of the Cloud, and services such as Microsoft's Business Productivity Online Suite (BPOS) and other hosted Exchange service providers, the messaging around it is becoming increasingly unclear.

    What do I mean with that last statement? Well, take a look at the short list of things I hear regularly with clients:

    • Don’t need to backup E2K10 – Microsoft told me so
    • Don’t need fast disks anymore – Microsoft told me so
    • Messaging is not core to my business – will outsource it
    • What added value does the internal IT provide that the Cloud offerings of Exchange cannot provide?
    • Cheaper to host Exchange mailboxes with Microsoft BPOS or another Service Provider

    Well having seen in many large organizations what happens when the eMail service is not available, I would argue that messaging services are critical. Indeed, the more integration one has with other upstream applications that utilize eMail, the greater the dependency on a reliable environment. This would indicate that messaging services are core to the business, and indeed may be tightly linked to new service offerings.

    The idea of not backing up data, while certainly very attractive, is a little off the mark. There are other reasons for backing up data than simply to cover the "in case" scenario – compliance, single-item recovery and litigation, amongst others – that require the preservation of historic point-in-time copies of the messaging environment.

    However, the last points concern cost, and whether it is more effective to host Exchange with Microsoft directly. This is really a bit of a sensitive topic for most administrators and indeed organizations. One of the reasons that Exchange is expensive is that it simply could not, in an easy fashion, cover the needs of the organization in terms of ease of administration, scalability, infrastructure needs, reliability and indeed cost. It does seem to me that Microsoft itself may well be partially responsible for the "high cost" of messaging services.

    Why is this Relevant for Virtualization and the Cloud?

    Well, many of the cost elements of Exchange environments related in particular to the enormous number of dedicated servers that were required to host the various Exchange server roles. The I/O profile of the messaging service was also not very conducive to using anything less than high-end disks in performance-oriented RAID groups.

    Administration of bulk activities such as moving mailboxes, renaming servers/organizations, backup/restore and virus scanning was not particularly effective, to say the least.

    Don't get me wrong, Exchange 2010 is a massive improvement over previous versions. I would put it akin to the change from Exchange 5.5 to Exchange 2000. The new PowerShell enhancements are great, and we are finally getting better I/O utilization, allowing us to use more cost-effective storage.

    Where it starts to all go wrong is when Microsoft starts to lay down support rules or gives out advice that goes against the prevailing wisdom of seasoned administrators:

    • Virtualization on Hyper-V is supported, whilst other hypervisors need to be in their Server Virtualization Validation Program (SVVP)
    • Certain advanced functions such as snapshots, mixing of Exchange server roles in a VM and certain vCPU:pCPU ratios are not supported
    • Low-performance disks are fine for messaging functions – but what about backup/restore/AV scanning/indexing etc.?
    • Still no flexible licensing options that allow for "pay-as-you-use" or allow cost savings from multi-core processors

    Never mind the fact that there are thousands of organizations that have successfully virtualized their Exchange environments using VMware, saving serious amounts of money. Never mind that these organizations are enterprise class, and run their servers at high utilization levels receiving millions of emails daily, whilst running hourly backups and daily virus scans. Never mind that most tier-1 partners of Microsoft offer qualified support for features such as snapshots for rapid backup/recovery.

    Why then is Microsoft "scare-mongering" organizations into now moving to BPOS – to save money, no less? The fact is that there are very, very few organizations that truly know the cost of their eMail environment. Therefore, how can one say that it is too expensive to do eMail in-house?

    The basis for calculating business cases also varies wildly. It is very difficult to put a price on the cost of operations for messaging environments – even a messaging team is not 100% utilized – and then to spread this cost across the total number of mailboxes.

    Indeed the cost of a mailbox per month seems to me not to be granular enough. What is the cost of a message? Who pays for inter-system messages? What about the cost of mailbox storage per month? What is the “true” cost per mailbox per month?
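    To make the granularity question concrete, here is a minimal sketch of such a cost model (entirely my own illustrative assumptions, not figures from any client or from Microsoft): shared infrastructure and operations costs are spread across the mailbox count, with a per-GB storage component added on top.

        def cost_per_mailbox_month(mailboxes, infra_cost_month, ops_cost_month,
                                   avg_mailbox_gb, storage_cost_per_gb_month):
            """Very simple 'true cost per mailbox per month' estimate."""
            shared = (infra_cost_month + ops_cost_month) / mailboxes   # spread shared costs
            storage = avg_mailbox_gb * storage_cost_per_gb_month       # per-mailbox storage
            return shared + storage

        # Hypothetical 10,000-seat organization (all figures invented for illustration):
        print(round(cost_per_mailbox_month(10_000, 60_000.0, 25_000.0, 2.5, 0.30), 2))  # -> 9.25

    Even a toy model like this forces the questions above to be answered explicitly: which costs are shared, which are per-mailbox, and which should really be metered per message or per GB.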

    The private cloud, and 100% virtualization of Exchange server in particular, is a chance that most large companies should not really pass by so easily. It is the perfect application to verify the cloud assumptions about elasticity, on-demand and metered usage to get the “true” cost of eMail services. As it is so well understood by internal administrators, a company can experience first-hand:

    • massive reduction in server resources needed with virtualization
    • resource metering per user per billable time period
    • billing systems alignment to cost of eMail services per user per month
    • operational process alignment for the Private Cloud way of doing things
    • eGRC can be applied and enforced with the necessary controls/tools
    • infrastructure business intelligence for zeroing in on further cost consolidation areas
    • provide the basis for your internal Private Cloud – complete with self-service portals and end-2-end provisioning

    I always say that eMail is in some ways easier to virtualize than high-end database environments such as SAP. Too much time is lost in the difficulties, and the organization gets too little of the Cloud's benefits as a result. The time-2-value and order-2-cash processes take too long with that approach.

    With Exchange virtualization you can literally get started in a week or two once the infrastructure is on the ground – there are plenty of blueprints that can be utilized.

    Why is this important for the CIO?

    The CIO has the responsibility for setting IT direction in an organization. Simply following the scare-mongering of either vendors or outsourcing service providers will inevitably force you to move what may be a vital function for new product development out of your organization. Aside from this, there are many issues still regarding data confidentiality, compliance, and risk concerns that need to be tackled.

    Personally, I would advise large enterprise shops to look at virtualizing their entire Microsoft estate, starting with Exchange Server. This is not only going to make deep savings, but, as experience shows, also provides better service with less downtime than in the past. You choose the type of differentiated service you would like to offer your users. You decide what services to include, with some being mandatory like AV/malware/spam scanning.

    Use this as the basis for creating your Private Cloud, and start to gradually migrate entire services to that new platform, whilst decommissioning older servers. Linux is also part of that x86 server estate, which raises the obvious questions about replatforming away from proprietary RISC architectures onto the x86 server basis.

    Innovation is an area where particular emphasis should be applied. Rather than your IT organization putting the brakes on anything that looks unfamiliar, you should be encouraging innovation. The Private Cloud should be freeing up “time” of your administrators.

    These same administrators could be working more on IT-Project liaison roles to speed time-2-value initiatives. They can be creating virtual environments for application developers to get the next applications off the ground using incremental innovation with fast development cycles to bring new features online.

    Once you are running all virtual, you will have a very good idea of what things really cost, where to optimize CAPEX/OPEX levels, and how you compare against the wider industry in terms of offering fair value for IT services to your user community.

    Let legislation about data jurisdictions and chains of custody also mature. Push vendors for better terms on per-processor licensing, allowing "pay-as-you-use" models to come into play – not on their terms in their Clouds, but on your terms in your own Private Cloud initially. Remember, there are always choices in software. If Exchange+Microsoft won't cut it for you, then use an alternative, e.g. VMware+Zimbra ;-)

    Public Cloud offerings are not fully baked yet, but they represent the next wave of cost consolidation. Recent high-profile outages at Amazon, Google and Microsoft, as well as those "un-publicized" failures, show seasoned veterans that there are probably another 2 years to go before full trust in Public Clouds is established, with the necessary measures to vet Cloud Provider quality. Remember, one size does not fit all!
