La Vie en Chrome

This piece reflects the personal experience and opinions of Asha Kinney and does not reflect the opinions or recommendations of the Hampshire College IT Department 🙂.

This past summer, the “m” key broke off my ancient PC laptop. I was aware that its days were numbered, but this really hammered it home. I was also starting to realize how quaint it was to have all my family photos stored on a local device- yes, they were backed up, but still. I realized it was time to act in a manner befitting an IT professional and get all that stuff into the cloud, one way or the other.

At around the same time, I discovered that through my Marlboro College alum account I had unlimited cloud storage on Google Drive. Giddy up! I proceeded to upload 200 gigs' worth of photos and bade farewell to the Dell of yesteryear. However, I still needed a computer to work on. My work Mac was also starting to feel its age. It took about 15 minutes to boot up in the morning, and a day without a spinning beach ball was a very lucky day indeed.

Enter the Chromebook.

A Chromebook is a small, cheap ($200-$300) laptop that is neither Mac nor PC nor Linux but its own thing- totally Google, running Chrome OS. The concept is that your electronic life is entirely in the cloud, and nothing is stored on, or run off of, the actual computer. The computer runs a web browser, and that's about it.

This reliance on cloud services means the computer itself doesn't have to do much work, so there's not much to go wrong. It's super fast. It boots up in three seconds. There are no spinning beach balls, ever. Did I mention the $200 part?

Here’s what it’s great for:

  • Email- using Gmail, Webmail, or a similar web service.
  • Basic productivity stuff using Google Docs/Slides/Spreadsheets. Any basic file in a Microsoft Office format can be imported and worked on just fine. The computer uses your Google Drive as its main file system. Some stuff can be stored locally, but not much.
  • Managing photos, media, music, etc. (which can all be stored on Google).
  • Using any other service or site you access through a web browser. This is nothing to sneeze at, as there are even photo editing, page layout/design, and video editing apps now available through web interfaces.

However, there are lots of people for whom a Chromebook will simply not work. Do any of these apply to you? If so, you’re going to hit a wall pretty quickly.

  • You’re wedded to an email client like Apple’s Mail, or Thunderbird.
  • You rely on Adobe products or like to annotate PDFs.
  • You want to connect to Hampshire’s network printers or file servers.
  • You use the super-advanced features of Microsoft Office programs.
  • You require any specialized software that is not available through a web browser (and/or does not work with the Chrome browser).

Even with these caveats, you can see how the Chromebook is a great choice for the average person doing average stuff. Many higher-ed and K-12 systems are going Google and are giving Chromebooks to their students. The Amherst Regional System just announced its move to Google and subsequent purchase of Chromebook carts for classrooms.

Does this all mean selling your soul to Google on a personal or institutional level? Yes, yes it does. Increasingly, though, schools and colleges do the math and decide that they simply cannot afford the luxury of NOT going Google. A lone institution would never be able to provide the cutting-edge tools and systems that Google does. Similarly, a $200 computer is a huge leveler for students who cannot shell out $2,000 for a shiny new Mac. Thus, it becomes an internal war of conscience between providing your students access to the best possible tools and protecting what little privacy they have left. Google makes this choice easier for schools by claiming to protect the privacy of educational accounts, but do they really?

But let's get back to our little friend, the Chromebook. I spent about six months living very happily with it. It required re-thinking how I did certain work, but I was able to find a way to accomplish most tasks. This is saying a lot given my job as an instructional technologist! When I hit a wall (see the list above), I bumped over to my Mac.

But then, something happened. I decided to give my work Mac a fresh start and wipe the hard drive. It perked right up. No more spinning beach balls. And lo and behold, it could do everything I needed to do, without my having to hunt around for the way to do it. I was forced to admit that for me personally, and with my particular job, I really do need a computer that can just do everything, with no limitations or qualifications.

The Chromebook is still great and useful for lots of things, but to loosely quote the late, great Queen of Camden: my odds were stacked, and I went back to Mac.

 

As we head into Fall

The change of seasons is upon us and the life-cycle of the tomato plants at the farm center has me thinking about life-cycles in general. I have been doing some reading lately about the life-cycles of organizations. I found this great GIF on the web that illustrates the challenges faced by maturing organizations.

 

[Image: product life-cycle curve, via http://suifaijohnmak.files.wordpress.com/2012/10/product-life-cycle-r0505e_a.g]

The real question I have is where are we on this curve? I suspect that we are reaching maturity, which in and of itself is not problematic if addressed properly. The key to success in the future will be actually tracking where we are on this curve and crafting a strategy to move forward.

A great deal has been written recently in various publications about Design Thinking. Roger Martin's The Design of Business is a great book on the subject and a quick read: http://www.amazon.ca/Design-Business-Thinking-Competitive-Advantage/dp/1422177807

The book is a good primer on the subject and one that I would recommend. We will all soon be involved in the planning of the College’s future. New research around ideas like Design Thinking will help us to answer the tough questions as we move forward and allow Hampshire to once again become disruptive in the marketplace.

Bob C

Five years and a crystal ball

The latest thinking from Gartner.

 

Bob C

 

 


From Network World:

This story appeared on Network World at
http://www.networkworld.com/news/2012/102212-gartner-trends-263594.html

 

Gartner: 10 critical IT trends for the next five years

Software-defined networks, data explosions, hybrid clouds will test IT prowess

By Michael Cooney, Network World
October 22, 2012 03:26 PM ET

ORLANDO — Trying to stay ahead of the curve when it comes to IT issues is not a job for the faint of heart. That point was driven home at Gartner's annual IT Symposium here, where analyst David Cappuccio warned that "new forces that are not easily controlled by IT are pushing themselves to the forefront of IT spending."

The forces of cloud computing, social media/networking, mobility and information management are all evolving at a rapid pace. These evolutions are largely happening despite the controls that IT normally places on the use of technologies, Cappuccio stated. "IT was forced to support tablets, and end users forced them to support IM and wireless networks a few years ago. And more such technologies are on the horizon," he said.

Cappuccio’s presentation listed the following as the “Ten Critical Trends and Technologies Impacting IT During the Next Five Years.” The following is taken from Cappuccio’s report:

1. Disruption: Business users expect the same level of IT performance and support as they experience with consumer-based applications and services. Business-user demand for customer satisfaction is far outstripping what IT support organizations can supply. IT organizations must invest in the development of IT service desk analyst skills and attributes, and organize appropriately to increase IT's perceived value to the rest of the organization. Business-user satisfaction can be a moving target, but enabling higher levels of productivity at the IT service desk level demonstrates that the IT organization cares about the business, and that it's committed to ensuring that users meet their goals and objectives. While a focus on traditional training, procedures, security access, knowledge management and scripts is warranted, a focus on next-generation support skills will be paramount to meeting the needs and expectations of the business more efficiently.

2. Software-Defined Networks: SDN is a means to abstract the network just as server virtualization abstracts the server. It transforms network configuration from box-at-a-time, port-at-a-time work into flow-at-a-time rules linked to applications, and it gives programmatic control. With SDN, the controller has a view of the entire network topology, both its virtual and physical components (switches, firewalls, application delivery controllers, etc.), and provides that abstracted view for provisioning and managing the network connections and services that applications and operators require.

OpenFlow is a good example of such a generalized SDN protocol: it provides a generic API that any network operator can use to create control and management schemes based on the application requirements of the organization. And there will be other OpenFlow-style SDN protocols, designed from the ground up around application-level logic rather than the traditional network paradigm of protocol-, device- and link-based thinking.

Used along with encapsulation, SDN can dynamically extend a private cloud into a hybrid model, masking the enterprise-specific IP addresses from the cloud provider's infrastructure. SDN also promises to let service providers offer dynamically provisioned WAN services, potentially across multi-provider/multi-vendor networks. Of course, there is the potential for significant organizational disruption as traditional network skills begin to shift and alignment with specific vendor products or platforms becomes less rigid.
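To make "flow at a time, linked to application" concrete, here is a minimal sketch in Python. The controller address and its /flows endpoint are hypothetical stand-ins (real controllers such as Floodlight or OpenDaylight expose similar but differently shaped REST APIs); the point is that the rule matches traffic by application rather than by box and port, and the controller programs the switches on your behalf.

```python
# A minimal sketch of "flow-at-a-time" SDN configuration. The controller
# URL and /flows endpoint are hypothetical, not a real product's API.
import requests

CONTROLLER = "http://sdn-controller.example.edu:8080"  # hypothetical address

# An OpenFlow-style rule: match traffic by application (TCP port 443)
# rather than by box/port, and forward it out a chosen switch port.
flow_rule = {
    "switch": "00:00:00:00:00:00:00:01",  # datapath ID of the target switch
    "name": "https-to-port-2",
    "priority": 100,
    "match": {"eth_type": 0x0800, "ip_proto": 6, "tcp_dst": 443},
    "actions": [{"type": "OUTPUT", "port": 2}],
}

# Push the rule; the controller, which sees the whole topology,
# programs the physical and virtual switches for us.
resp = requests.post(f"{CONTROLLER}/flows", json=flow_rule, timeout=5)
resp.raise_for_status()
print("flow installed:", resp.status_code)
```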

3. Bigger data and storage: A fact that data centers have lived with for many years remains true today: data growth continues unabated. From an IT perspective, the main problem is not awareness but prioritization. We have spent so many years dealing with this, and surviving, that storage management projects are usually initiated from the ground up rather than top-down, relegating many of them to "skunkworks" status with little long-term funding.

Leading-edge firms have realized the problem and are beginning to focus on storage utilization and management as a means to reduce floor space usage and energy usage, improve compliance and improve controls on growth within the data center. Now is the time to do this, because most of the growth during the next five years will be in unstructured data — the most difficult to manage from a process or tool point of view. Technologies that will become critical over the next few years are in-line deduplication, automated tiering of data to get the most efficient usage patterns per kilowatt, and flash or SSD drives for higher-end performance optimization, but with significantly reduced energy costs. NAND pricing continues to improve at a rapid pace, moving from $7,870 per gigabyte in 1997 down to $1.25 per gigabyte today — and this trend will continue.
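Of those technologies, in-line deduplication is the easiest to illustrate. Here is a minimal sketch, assuming fixed-size chunks and an in-memory dictionary standing in for the chunk store; production systems typically use content-defined chunk boundaries and persistent indexes.

```python
# A minimal sketch of in-line deduplication: chunk incoming data, hash
# each chunk, and store a chunk only the first time its hash is seen.
import hashlib

CHUNK_SIZE = 4096          # fixed-size chunking for simplicity; real
                           # systems often use variable-size boundaries
chunk_store = {}           # hash -> chunk bytes (stands in for disk)

def write_deduplicated(data: bytes) -> list[str]:
    """Store data, returning the list of chunk hashes (the 'recipe')."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in chunk_store:      # new content: store it
            chunk_store[digest] = chunk
        recipe.append(digest)              # duplicate: just reference it
    return recipe

# Two copies of the same file consume the space of one.
recipe_a = write_deduplicated(b"same report" * 2000)
recipe_b = write_deduplicated(b"same report" * 2000)
print(len(chunk_store), "unique chunks stored for two identical files")
```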

4. Hybrid clouds: Vendors increasingly use cloud computing as a marketing label for many old technologies and offerings, devaluing the term and trend. Although cloud computing is a natural evolution of various enterprise and Web-based technologies and trends, it is a mistake to simply relabel these older technologies as “cloud computing.” This new computing model drives revolutionary changes in the way solutions are designed, built, delivered, sourced and managed.

Cloud computing is heavily influenced by the Internet and vendors that have sprung from it. Companies such as Google deliver various services built on a massively parallel architecture that is highly automated, with reliability provided via software techniques rather than highly reliable hardware. Although cost is a potential benefit for small companies, the biggest benefits of cloud computing are built-in elasticity and scalability, which reduce barriers and enable these firms to grow quickly. A hybrid cloud service is composed of services that are combined either for capability beyond what any one of them offers alone (aggregating services, customizing them, or integrating two together) or for additional capacity.

There is an emerging trend in hybrid data centers whereby growth is looked at from the perspective of application criticality and locality. As an example, if a data center is nearing capacity, rather than beginning the project to define and build another site, workloads are assessed based on criticality to the business, risk of loss, and ease of migration, and a determination is made to move some workloads to co-location facilities, hosting, or even a cloud-type service. This frees up floor space in the existing site for future growth, both solving the scale problem and deferring capital spending, potentially for years. An alternative is for older data centers to begin migrating critical work off-site, thus reducing downtime risks and business interruptions, while freeing up the old data center for additional (non-critical) work or for a slow, in-place retrofit project.
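A minimal sketch of that workload assessment, with made-up workloads, scores, and thresholds (a real assessment weighs many more factors), might look like this:

```python
# A toy version of the criticality/risk/ease-of-migration assessment
# described above. All workloads, scores, and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    criticality: int     # 1 (low) .. 5 (business-critical)
    loss_risk: int       # 1 (tolerable) .. 5 (unacceptable)
    migration_ease: int  # 1 (hard to move) .. 5 (trivial to move)

def placement(w: Workload) -> str:
    # Keep critical, hard-to-move work in-house; push the rest out to
    # co-location or cloud, freeing floor space for future growth.
    keep_score = w.criticality + w.loss_risk - w.migration_ease
    if keep_score >= 6:
        return "keep on-premises"
    if keep_score >= 3:
        return "co-location / hosting"
    return "cloud service"

for w in [Workload("ERP", 5, 5, 1),
          Workload("reporting DB", 3, 3, 2),
          Workload("test/dev", 1, 1, 5)]:
    print(f"{w.name:>12}: {placement(w)}")
```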

5. Client server: In the PC world of the last quarter century, both the operating system and application were primarily resident on the desktop (some large and complex applications such as ERP were located on servers that could be remote from clients). Today, anything goes! The operating system — as well as the application — can be executed on the PC or a server — or streamed to a PC when needed. Choice of architecture is dependent on user needs and the time frame for implementation. No longer does one size fit all.

Regarding Windows 8 deployments, 90% of enterprises will bypass broad-scale deployment and will focus on optimized Windows 8 deployments on specific platforms (e.g., mobile, tablet) only. Servers have been undergoing a long-term evolutionary process. They have moved from stand-alone pedestals to rack-mounted form factors in a rack cabinet. The latest step in x86 server hardware evolution is the blade server. It has taken hardware from single servers with internal peripherals in a rack cabinet to a number of more dense servers in a single chassis with shared backplane, cooling and power resources. A true component design allows for the independent addition of even more granular pieces like processors, memory, storage, and I/O elements.

As blades have grown, so has the marketing push from server providers to position blades as the next most advanced technical step in server evolution and even, in some cases, as the ultimate server solution. It always takes a closer examination of multiple factors (required density, power/cooling efficiency requirements, high availability, workload, etc.) to reveal where blade, rack and skinless servers really do have advantages. Moving forward, this evolution will split into multiple directions as appliance use increases and specialty servers begin to emerge (e.g., analytics platforms).

6. The Internet of Things: This is a concept that describes how the Internet will expand as physical items such as consumer devices and physical assets are connected to the Internet. The vision and concept have existed for years; however, there has been acceleration in the number and types of things that are being connected and in the technologies for identifying, sensing and communicating. Key advances include:

Embedded sensors: Sensors that detect and communicate changes (e.g., accelerometers, GPS, compasses, cameras) are being embedded not just in mobile devices but in an increasing number of places and objects.

Image recognition: Image recognition technologies strive to identify objects, people, buildings, places, logos and anything else that has value to consumers and enterprises. Smartphones and tablets equipped with cameras have pushed this technology from mainly industrial applications to broad consumer and enterprise applications.

NFC payment: NFC allows users to make payments by waving their mobile phone in front of a compatible reader. Once NFC is embedded in a critical mass of phones for payment, industries such as public transportation, airlines, retail and healthcare can explore other areas in which NFC technology can improve efficiency and customer service.
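To ground the embedded-sensors idea above in code, here is a minimal sketch of a device's report loop. The accelerometer driver is simulated and the collection endpoint is hypothetical; the shape (sample, timestamp, report over HTTP) is the common pattern for connected things.

```python
# A minimal sketch of an Internet-of-Things device loop. The endpoint
# is hypothetical and the sensor is simulated with random values.
import json, random, time
from urllib import request

ENDPOINT = "http://iot.example.edu/readings"  # hypothetical collector

def read_accelerometer() -> dict:
    # Stand-in for an embedded sensor driver.
    return {"x": random.gauss(0, 0.1), "y": random.gauss(0, 0.1),
            "z": random.gauss(1.0, 0.1)}

while True:
    reading = {"device": "sensor-42", "ts": time.time(),
               "accel": read_accelerometer()}
    req = request.Request(ENDPOINT,
                          data=json.dumps(reading).encode(),
                          headers={"Content-Type": "application/json"})
    request.urlopen(req, timeout=5)   # report the change upstream
    time.sleep(60)                    # sample once a minute
```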

7. Appliance madness: Organizations are generally attracted to appliances when they offer hands-off solutions to application and functional requirements, but organizations are also repelled by appliances when they require additional investments (time or software) for management functions. Thus, successful appliance products must not only provide a cost-effective application solution, they must require minimum management overhead.

Despite the historical mixed bag of successes and failures, vendors continue to introduce appliances to the market because the appliance model represents a unique opportunity for a vendor to have more control of the solution stack and obtain greater margin in the sale. In short, appliances aren’t going away any time soon. But what’s new in appliances is the introduction of virtual appliances. A virtual appliance enables a server vendor to offer a complete solution stack in a controlled environment, but without the need to provide any actual hardware. We see virtual appliances gaining popularity and fully expect to see a broad array of virtual appliance offerings emerge during the next five years. However, the growth in virtual appliances will not kill physical appliances; issues such as physical security, specialized hardware requirements and ecosystem relations will continue to drive physical requirements.

The very use of the appliance terminology creates great angst for some vendors and users — particularly for physical appliances. Strictly speaking, a highly integrated platform like Oracle’s Exadata or VCE Vblock is not a true appliance; these are factory integrated systems that will require some degree of configuration and tuning, even when the software stack is integrated; they will never fit the classic notion of a “pizza box.” But while such systems will not be consumed as appliances, they are certainly packaged and sold in a very appliance-like manner. Many other physical appliances will be more faithful to the concept — they will be plug & play devices that can only deliver a very prescribed set of services.

8. Complexity: The sources of complexity within IT are easy to spot. They include the 1,600 initialization parameters that can be fed into starting an Oracle database and the 2,300 pages of manuals for using a Cisco switch. The complexity increases when we combine several elements, such as Microsoft Exchange running on VMware. What makes this complexity worse, however, is the fact that we are not getting our money's worth: historical studies suggest that IT organizations actually use only roughly 20% of the features and functions in a system. This results in large amounts of IT debt, whose high maintenance costs for "keeping the lights on" divert needed funds from projects that could enhance business competitiveness.

9. Evolution toward the virtual data center: As we enter the third phase of virtualization (phase 1: mainframe/Unix, phase 2: basic x86), we see that the higher the proportion of virtualized instances, the greater the workload mobility across distributed and connected network nodes, validating fabric and cloud computing as viable architectures. As more of the infrastructure becomes virtualized, we are reshaping IT infrastructure. The "fabric" will eventually have the intelligence to analyze its own properties against policy rules, create optimum paths, change them to match changing conditions, and do so without requiring laborious parameter adjustments. x86 virtualization is effectively the most important technology innovation behind the modernization of the data center. With it comes a sea change in how we view the roles of compute, network and storage elements — from physically hardwired to logical and decoupled applications.

10. IT demand: With the increased awareness of the environmental impact data centers can have, there has been a flurry of activity around the need for a data center efficiency metric. Most that have been proposed, including power usage effectiveness (PUE) and data center infrastructure efficiency (DCiE), attempt to map a direct relationship between total facility power delivered and IT equipment power available. Although these metrics will provide a high-level benchmark for comparison purposes between data centers, what they do not provide is any criteria to show incremental improvements in efficiency over time. They do not allow for monitoring the effective use of the power supplied — just the differences between power supplied and power consumed.

For example, a data center might be rated with a PUE of 2.0, an average rating, but if that data center manager decided to begin using virtualization to increase his or her average server utilization from 10% to 60%, while the data center itself would become more efficient using existing resources, then the overall PUE would not change at all. A more effective way to look at energy consumption is to analyze the effective use of power by existing IT equipment, relative to the performance of that equipment. While this may sound intuitively obvious, a typical x86 server will consume between 60% and 70% of its total power load when running at very low utilization levels. Raising utilization levels has only a nominal impact on power consumed, and yet a significant impact on effective performance per kilowatt.
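The arithmetic behind that example is easy to check. A minimal sketch with made-up but typical numbers shows PUE holding steady while performance per kilowatt improves roughly fivefold:

```python
# PUE compares facility power to IT power, so consolidating work onto
# busier servers improves performance per kilowatt but leaves PUE flat.
# All numbers below are illustrative, not measurements.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: facility power / IT power."""
    return total_facility_kw / it_equipment_kw

# A 2.0-PUE data center: 1000 kW in, 500 kW reaching IT equipment.
before_pue = pue(1000, 500)

# A typical x86 server draws most of its peak power even when nearly
# idle, so raising utilization from 10% to 60% barely moves power draw...
power_at_10pct, power_at_60pct = 0.65, 0.75   # fraction of peak draw
work_at_10pct, work_at_60pct = 0.10, 0.60     # useful work delivered

# ...but performance per kilowatt improves roughly fivefold.
print("PUE unchanged:", before_pue)                       # 2.0
print("perf/kW before:", work_at_10pct / power_at_10pct)  # ~0.15
print("perf/kW after: ", work_at_60pct / power_at_60pct)  # ~0.80
```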

Pushing IT resources toward higher effective performance per kilowatt can have the twofold effect of improving energy consumption (putting energy to work) and extending the life of existing assets through increased throughput. The power-to-performance effectiveness (PPE) metric is designed to capture this effect.

Follow Michael Cooney on Twitter: @nwwlayer8 and on Facebook.


All contents copyright 1995-2012 Network World, Inc. http://www.networkworld.com