Google Apps for Education at Hampshire

What if you could easily share a document with a co-worker, Five College colleague, committee member, or student?

What if you could both edit the document at the same time, roll-back to previous versions, and never have to worry about what machine that document is stored on?

Welcome to Google Apps for Education.

Hampshire College is joining our Five College partner schools in rolling out Google Apps for Education, becoming part of a worldwide user base of more than 40 million.

Google Apps for Education offers many features that will benefit the Hampshire community:

Unlimited Storage: Google Drive offers unlimited storage for files and folders.

Collaborate: When you share a Google Docs file, all your collaborators can view and edit the file simultaneously. While this is currently possible using personal Google accounts, having a Hampshire-specific account will make it much easier to find the people you want to share with. You will no longer have to collect Gmail addresses from Hampshire collaborators and will be able to simply use their Hampshire email addresses.

Share files: You can easily share any file on your Google Drive with anyone else who has a Google account (a short scripting sketch follows this feature list).

Large file transfers: Most email systems (such as the College’s) limit the size of file attachments, but Google Drive lets you share files of any size.

Web Conferencing: Google Hangouts allows for web conferencing with up to 15 participants.
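For the scripting-inclined, sharing can also be done programmatically. Here is a minimal sketch using Google’s Drive v3 API from Python; the file ID, email address, and credential setup are placeholders, and this is an illustration rather than an official IT-supported recipe.

```python
# Minimal sketch: grant a collaborator edit access to a Drive file via the
# Drive v3 API. Assumes OAuth credentials (creds) have already been obtained;
# the file ID and email address below are placeholders.
from googleapiclient.discovery import build

def share_with_collaborator(creds, file_id, email):
    drive = build("drive", "v3", credentials=creds)
    permission = {
        "type": "user",         # share with a single account
        "role": "writer",       # collaborator can view and edit
        "emailAddress": email,  # e.g., a @hampshire.edu address
    }
    drive.permissions().create(
        fileId=file_id,
        body=permission,
        sendNotificationEmail=True,  # let the collaborator know
    ).execute()

# Example (placeholders): share_with_collaborator(creds, "FILE_ID", "colleague@hampshire.edu")
```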

You can now access the following services with your Hampshire login: Google Drive (including Docs, Sheets, Slides, and Forms), Hangouts, Groups, Maps, YouTube, and more.

Some Google services will NOT be available, however. After careful review, Hampshire College’s Information Technology department has determined that Gmail and Calendar do not, at this time, provide the level of performance, support, data privacy, or data retention that the College requires. We will continue to monitor enhancements to these products, as well as new offerings in the Google Apps for Education suite, for future deployment possibilities.

If you have been using an older Google Drive account with your @hampshire.edu email address or Hampshire alias as your login, you must first create a new Google account with a different email address in order to retain your files. Please visit the Transferring Google Drive page here: https://www.hampshire.edu/it/transferring-google-drive-to-a-new-google-account.

To get started using Hampshire’s Google Apps, please visit: https://www.hampshire.edu/it/google-apps-for-education-at-hampshire

Let us know how it goes! We have a feedback form for these services: http://goo.gl/forms/eBix5dw7ILlX4PzS2

As we head into Fall

The change of seasons is upon us, and the life-cycle of the tomato plants at the Farm Center has me thinking about life-cycles in general. I have been doing some reading lately about the life-cycles of organizations. I found a great GIF on the web that illustrates the challenges faced by maturing organizations.

 

[Image: product life-cycle curve (product-life-cycle-r0505e_a), via http://suifaijohnmak.files.wordpress.com/2012/10/product-life-cycle-r0505e_a.g]

The real question I have is where are we on this curve? I suspect that we are reaching maturity, which in and of itself is not problematic if addressed properly. The key to success in the future will be actually tracking where we are on this curve and crafting a strategy to move forward.

A great deal has been written recently in various publications about Design Thinking. Roger Martin has a great book that is a quick read: http://www.amazon.ca/Design-Business-Thinking-Competitive-Advantage/dp/1422177807

The book is a good primer on the subject and one that I would recommend. We will all soon be involved in the planning of the College’s future. New research around ideas like Design Thinking will help us to answer the tough questions as we move forward and allow Hampshire to once again become disruptive in the marketplace.

Bob C

Go Team or Go Group?

Twenty miles per hour and I am guessing my heart rate is way into the anaerobic zone. Stop looking at the speedometer and concentrate, I tell myself, you need to focus on the task at hand.

The sound of hard breathing and skinny tires whizzing over pavement mixes with the rush of the wind, and that sound level increases markedly if I drift outside of the invisible draft tube that I am riding in. I am riding in a pace line of eight bicycle riders, and we are tearing along Falls Road in Montague, Mass. It is exciting and fairly frightening to think that my skin staying intact relies on a group of riders, many of whom I just met, working as a “Team”.

Or are we a group?

Katzenbach and Smith (2011) in their Harvard Business Review article The Discipline of Teams outline ways to tell working groups apart from teams.

Groups exhibit:

  • Individual work products
  • Strong, clearly focused leadership
  • Individual accountability

Teams exhibit:

  • Shared leadership roles
  • Collective work products
  • Measurement of performance by collective output

If we examine the characteristics of both, one could argue that the pace line is in fact a team. Katzenbach and Smith further state that “A team is a small number of people with complementary skills who are committed to a common purpose, set of performance goals, and approach for which they hold themselves mutually accountable.” (Katzenbach and Smith, HBR, 2011, location 437.)

Certainly this group of people working together is bound by a common purpose. Moving fast and not getting road rash seem like worthy goals. The end result is group speed and reduced rider effort that cannot be achieved by a single rider alone. But if we look at this group, we are lacking one key element that Gratton and Erickson (2011), in their Harvard Business Review article Eight Ways to Build Collaborative Teams, state is key to sustaining high-functioning teams: building on heritage relationships. The authors argue that having team members who are familiar with each other is very desirable. There is something to be said for allowing teams time to develop good social and working relationships. By allowing this “forming” time, teams develop a higher level of trust: an essential ingredient and, some might argue, the key to developing high-performing teams.

This pace line that we are examining was formed from a collection of riders who really don’t know one another. It is a group that forms every Tuesday, composed of whoever shows up for the Bike Shop’s group ride. There is of course some chance that you will get to know the regulars, but there are always a fair number of “newbies” who show up. We don’t go over rules of behavior, roles, or responsibilities; we mill around, jump on our bikes, and ride. Granted, it takes a while for the pace lines to form, and there is a certain amount of sorting that takes place, but it is truly dynamic in nature.

Amy Edmondson, in her Harvard Business Review article Teamwork on the Fly, describes a new model of team formation and development. She states that “Teaming is teamwork on the fly: a pickup basketball game rather than plays run by a team that has trained as a unit for years. It’s a way to gather experts in temporary groups to solve problems they’re encountering for the first and perhaps only time.” (Edmondson, HBR, 2012, p. 1) I feel that this is important new thinking on the subject of team development and creation.

“Moore’s law” is named for Gordon Moore, one of the co-founders of Intel Corporation, who observed in the mid-1960s that the number of transistors on a chip doubles approximately every two years, increasing computing power exponentially. Nice, but how does that law apply to teams? It turns out that Moore’s law reflects not only the growth in computing power and the speed of technological advances, but also the speed at which those advances drive societal change. Simply put, if we were to graph the speed at which we experience change in our lives, the curve at this point would be almost vertical. We develop products and innovate rapidly, and as a result we need to form and dissolve teams at a dizzying pace.
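To make that doubling concrete, here is a quick back-of-the-envelope sketch; the starting transistor count and time span are made-up numbers used only to illustrate the exponential curve.

```python
# Illustrative arithmetic for a doubling-every-two-years trend.
# The starting values are invented for the sake of the example.
def doublings(years, period=2.0):
    """Number of doublings over a span of years."""
    return years / period

start_transistors = 2_300           # hypothetical starting chip
years = 20
growth = 2 ** doublings(years)      # 2^(20/2) = 1,024x
print(f"After {years} years: roughly {start_transistors * growth:,.0f} transistors "
      f"({growth:,.0f}x the starting count)")
```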

Edmondson discusses stable teams at the Simmons bedding company, whose teams were examples of very high-performing teams. “But Simmons had what many companies today lack: reasonably stable customer preferences, purely domestic operations, and no significant boundaries that had to be crossed to get the job done.” (Edmondson, HBR, 2012, p. 1) The world as we know it has changed, and the need for new methods of team development has arisen. Teaming paints a picture of rapidly assembled teams that have no time to form trust relationships and little or no stability. Teams are built and torn down as needed to address specific and complex tasks. Teaming addresses the fast-paced, complex environment that exists today for many businesses.

Does this throw out conventional wisdom regarding team formation? It turns out that some traditional team development requirements actually serve this new model quite nicely. We still need to develop a common mission for our teams; in fact, it becomes even more important as we try to bond these diverse groups of people together for a common purpose. Clearly defined roles and responsibilities will allow these new teams to “hit the ground running”; we can no longer afford to waste time in the forming phase. We must remove as many obstacles as possible so that teams can move quickly through the development cycle of forming, storming, norming, and performing. Good working structure and a supportive team environment become critical.

We also need to make clear to team members that it is okay to fail. There is no time to build trust, so we must assure team members that we will support them and build what Edmondson calls a “psychological safety net” (Edmondson, HBR, 2012, p. 2). Team members must be assured that we are measuring performance and results on collective output.

We will need to apply good project management practices, such as Agile, to help these teams get up to speed quickly, along with tool sets that allow them to react to quickly changing requirements.

Methods of developing, sustaining, and managing teams and working groups are changing rapidly, and this requires new approaches like teaming. Teaming and more traditional team-building methods have to be mixed and combined as requirements dictate. Alloys have different properties than the components from which they are derived; mixing methods and applying teams, groups, or even hybrid team/group formations may result in an “alloy” that is greater than the sum of its parts. Now more than ever, proper application of team-building science is required.

Bob C

Bibliography

Katzenbach, J., & Smith, D. (2011). The Discipline of Teams. In Harvard Business Review on Building Better Teams [Kindle iPad version]. Retrieved from Amazon.com

Gratton, L., & Erickson, T. (2011). Eight Ways to Build Collaborative Teams. In Harvard Business Review on Building Better Teams [Kindle iPad version]. Retrieved from Amazon.com

Edmondson, A. (2012, April). Teamwork on the Fly. Harvard Business Review.

The Best Technology for the Holidays

None. It is kind of odd that I would say that in a blog devoted to technology, isn’t it?

Many mornings I will start my day with a 5K run along a beautiful stretch of Green Pond Rd in Montague. Like many other runners, my companions for this ritual are my dog and, of course, my iPod. About halfway through today’s music mix came John Mayer’s “Stop This Train,” live from, I believe, a show in Berkeley. In that song, he sings “stop this train, I want to get off; it’s moving way too fast.” As I ran along I thought how appropriate that song was for this past week. Indeed, we all wanted to stop the crazy train, and with it the horrific events that recently unfolded.

My best friend is of Wampanoag descent, and I have spent a lot of time with him and his family and attended many events and celebrations. Central to Native American culture is the notion of the warrior as a key figure in society. This is certainly true, and the responsibility of those individuals is to protect and to provide, with the emphasis on the latter, not the former. Providing brings a great level of respect to the individual and their family.

Most events and celebrations are marked by families or individuals doing something called “giving away.” When you want to honor someone, you give away something of value to you. People sometimes give away large amounts of material things. You will also typically feed everyone gathered. The sign of a great family is a large “give away,” and providing for everyone is key.

As we start this holiday season I want to challenge you to think a little differently.  Granted, the instinct to protect and provide is certainly present, and obviously amplified given recent events. I would also ask you to Engage. Take time to unplug, unwind and actually engage with family, friends and those around you.  I have written in the past about the need to unplug and engage but I felt it was time to restate the case.

I teach First Aid at various levels, and one of the first things we need to teach people is how to engage. How to actually ask “Hey are you okay?”. We are very good at simply walking past or deciding that someone else will take care of the problem. We need to change that; we need to stop and engage people. We as a society need to get the kids away from the computer or video game. We need to get out and engage with society and the world around us. That is the only way to build the world we all want to live in. Engage.

So this holiday season, even if just for one day, leave the iPhone, Android phone, iPad, or whatever it is in the drawer and Engage and enjoy family, friends, and the season.

 

Bob C

 

Five years and a crystal ball

The latest thinking from Gartner.

 

Bob C

 

 


From Network World:

This story appeared on Network World at
http://www.networkworld.com/news/2012/102212-gartner-trends-263594.html

 

Gartner: 10 critical IT trends for the next five years

Software-defined networks, data explosions, hybrid clouds will test IT prowess

By Michael Cooney, Network World
October 22, 2012 03:26 PM ET

ORLANDO — Trying to stay ahead of the curve when it comes to IT issues is not a job for the faint of heart. That point was driven home at Gartner’s annual IT Symposium here, where analyst David Cappuccio outlined what he described as new forces that are not easily controlled by IT and that are pushing themselves to the forefront of IT spending.

The forces of cloud computing, social media/networking, mobility and information management are all evolving at a rapid pace. These evolutions are largely happening despite the controls that IT normally places on the use of technologies, Cappuccio stated. “IT was forced to support tablets, and end users forced them to support IM and wireless networks a few years ago. And more such technologies are on the horizon,” he said.

Cappuccio’s presentation listed the following as the “Ten Critical Trends and Technologies Impacting IT During the Next Five Years.” The following is taken from Cappuccio’s report:

1. Disruption: Business users expect the same level of IT performance and support as they experience with consumer-based applications and services. Business-user demand for customer satisfaction is far outstripping the IT support organization’s supply. IT organizations must invest in the development of IT service desk analyst skills and attributes, and organize appropriately to increase IT’s perceived value to the rest of the organization. Business-user satisfaction can be a moving target, but enabling higher levels of productivity at the IT service desk level demonstrates that the IT organization cares about the business, and that it’s committed to ensuring that users meet their goals and objectives. While a focus on traditional training, procedures, security access, knowledge management and scripts is warranted, a focus on next-generation support skills will be paramount to meet the needs and expectations of the business more efficiently.

2. Software Defined Networks: SDN is a means to abstract the network, just as server virtualization abstracts the server. It transforms network configuration from box-and-port-at-a-time to flow-at-a-time, linked to the application, and it gives programmatic control: the network is abstracted the way a hypervisor abstracts the server. With SDN, the controller has a view of the entire network topology, both its virtual and physical components (switches, firewalls, ADCs, and so on), and provides that abstracted view for provisioning and managing the network connections and services that applications and operators require.

OpenFlow is a great example: a generalized SDN control protocol that provides a generic API that any network operator can use to create control and management schemes based on the application requirements of the organization. And there will be other OpenFlow-type SDN protocols, designed from the ground up around application-level logic rather than the traditional network paradigm of protocol-, device- and link-based thinking.

When used along with overlay encapsulations, SDN can dynamically extend a private cloud into a hybrid model, masking enterprise-specific IP addresses from the cloud provider’s infrastructure. SDN also promises to allow service providers to offer dynamically provisioned WAN services, potentially across multi-provider/multi-vendor networks. Of course, there is the potential for significant organizational disruption as traditional network skills begin to shift and alignment with specific vendor products or platforms becomes less rigid.
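To make the flow-at-a-time, linked-to-application idea a bit more tangible, here is a toy sketch of the match/action model that OpenFlow-style controllers expose. Every class and field below is invented for illustration; it is not any real controller’s API.

```python
# Toy illustration of OpenFlow-style SDN: a controller holds the topology view
# and pushes match/action flow rules to switches. All names are invented.
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict          # e.g., {"dst_ip": "10.0.0.5", "tcp_port": 443}
    action: str          # e.g., "forward:port2" or "drop"
    priority: int = 100

@dataclass
class Switch:
    name: str
    rules: list = field(default_factory=list)

    def install(self, rule: FlowRule):
        self.rules.append(rule)
        self.rules.sort(key=lambda r: -r.priority)  # highest priority first

class Controller:
    """Holds the whole (virtual and physical) topology and programs per-application paths."""
    def __init__(self, switches):
        self.switches = {s.name: s for s in switches}

    def provision_path(self, app_match, path):
        # Push one rule per hop so the application's traffic follows 'path'.
        for switch_name, out_action in path:
            self.switches[switch_name].install(FlowRule(app_match, out_action))

# Usage: steer one application's HTTPS traffic across two switches.
ctrl = Controller([Switch("edge1"), Switch("core1")])
ctrl.provision_path({"dst_ip": "10.0.0.5", "tcp_port": 443},
                    [("edge1", "forward:port2"), ("core1", "forward:port7")])
```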

3. Bigger data and storage: A fact that data centers have lived with for many years remains true today: Data growth continues unabated. From an IT perspective, one of the main issues is not awareness of the issue, but prioritization of the issues. We have spent so many years dealing with this, and surviving, that storage management projects are usually initiated from the ground up, rather than top-down, relegating many of these to “skunkworks” status with little long-term funding.

Leading-edge firms have realized the problem and are beginning to focus on storage utilization and management as a means to reduce floor space usage and energy usage, improve compliance and improve controls on growth within the data center. Now is the time to do this, because most of the growth during the next five years will be in unstructured data — the most difficult to manage from a process or tool point of view. Technologies that will become critical over the next few years are in-line deduplication, automated tiering of data to get the most efficient usage patterns per kilowatt, and flash or SSD drives for higher-end performance optimization, but with significantly reduced energy costs. NAND pricing continues to improve at a rapid pace, moving from $7,870 per gigabyte in 1997 down to $1.25 per gigabyte today — and this trend will continue.
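A quick calculation with the per-gigabyte figures quoted above (taking "today" to mean 2012, when the article was written) shows just how steep that price decline is:

```python
# Rough rate-of-decline calculation from the per-GB prices cited above.
import math

price_1997 = 7870.0   # $/GB in 1997, as cited
price_2012 = 1.25     # $/GB "today" (2012), as cited
years = 2012 - 1997

ratio = price_1997 / price_2012   # roughly 6,300x cheaper
halvings = math.log2(ratio)       # about 12.6 halvings
print(f"{ratio:,.0f}x cheaper over {years} years")
print(f"price per GB halves roughly every {years / halvings:.1f} years")
```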

4. Hybrid clouds: Vendors increasingly use cloud computing as a marketing label for many old technologies and offerings, devaluing the term and trend. Although cloud computing is a natural evolution of various enterprise and Web-based technologies and trends, it is a mistake to simply relabel these older technologies as “cloud computing.” This new computing model drives revolutionary changes in the way solutions are designed, built, delivered, sourced and managed.

Cloud computing is heavily influenced by the Internet and vendors that have sprung from it. Companies such as Google deliver various services built on a massively parallel architecture that is highly automated, with reliability provided via software techniques rather than highly reliable hardware. Although cost is a potential benefit for small companies, the biggest benefits of cloud computing are built-in elasticity and scalability, which reduce barriers and enable these firms to grow quickly. A hybrid cloud service is composed of services that are combined either for increased capability beyond what any one of them has (aggregating services, customizing them, or integrating two together), or for additional capacity.

There is an emerging trend in hybrid data centers whereby growth is looked at from the perspective of application criticality and locality. As an example, if a data center is nearing capacity, rather than beginning the project to define and build another site, workloads are assessed based on criticality to the business, risk of loss, and ease of migration, and a determination is made to move some workloads to co-location facilities, hosting, or even a cloud-type service. This frees up floor space in the existing site for future growth, both solving the scale problem and deferring capital spending, potentially for years. An alternative is for older data centers to begin migrating critical work off-site, thus reducing downtime risks and business interruptions, while freeing up the old data center for additional (non-critical) work, or for a slow, in-place retrofit project.
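One way to picture that workload assessment is as a simple scoring pass over the application portfolio. The criteria, scales, and thresholds below are invented for illustration; they are not a Gartner methodology.

```python
# Toy sketch of the workload-assessment idea described above: score each
# workload on criticality, risk of loss, and ease of migration, then decide
# whether it stays in the primary data center or is a candidate to move out.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    criticality: int        # 1 (low) .. 5 (business critical)
    risk_of_loss: int       # 1 (low) .. 5 (high)
    ease_of_migration: int  # 1 (hard) .. 5 (easy)

def placement(w: Workload) -> str:
    if w.criticality >= 4 or w.risk_of_loss >= 4:
        return "keep in primary data center"
    if w.ease_of_migration >= 4:
        return "candidate for co-location / hosting / cloud"
    return "review individually"

for w in [Workload("ERP", 5, 4, 2),
          Workload("dev/test environments", 2, 1, 5),
          Workload("departmental file shares", 3, 2, 3)]:
    print(f"{w.name}: {placement(w)}")
```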

5. Client server: In the PC world of the last quarter century, both the operating system and application were primarily resident on the desktop (some large and complex applications such as ERP were located on servers that could be remote from clients). Today, anything goes! The operating system — as well as the application — can be executed on the PC or a server — or streamed to a PC when needed. Choice of architecture is dependent on user needs and the time frame for implementation. No longer does one size fit all.

Regarding Windows 8 deployments, 90% of enterprises will bypass broad-scale deployment and will focus on optimized Windows 8 deployments on specific platforms (e.g., mobile, tablet) only. Servers have been undergoing a long-term evolutionary process. They have moved from stand-alone pedestals to rack-mounted form factors in a rack cabinet. The latest step in x86 server hardware evolution is the blade server, which has taken hardware from single servers with internal peripherals in a rack cabinet to a number of denser servers in a single chassis with a shared backplane, cooling, and power resources. A true component design allows for the independent addition of even more granular pieces like processors, memory, storage, and I/O elements.

As blades have grown, so has the marketing push from server providers to position blades as the next most advanced technical step in server evolution and even, in some cases, as the ultimate server solution. It always takes a closer examination of multiple factors (required density, power/cooling efficiency requirements, high availability, workload, etc.) to reveal where blade, rack, and skinless servers really do have advantages. Moving forward, this evolution will split into multiple directions as appliance use increases and specialty servers begin to emerge (e.g., analytics platforms).

6. The Internet of Things: This is a concept that describes how the Internet will expand as physical items such as consumer devices and physical assets are connected to the Internet. The vision and concept have existed for years; however, there has been acceleration in the number and types of things that are being connected and in the technologies for identifying, sensing and communicating. Key advances include:

Embedded sensors: Sensors that detect and communicate changes (e.g., accelerometers, GPS, compasses, cameras) are being embedded not just in mobile devices but in an increasing number of places and objects.

Image recognition: Image recognition technologies strive to identify objects, people, buildings, places, logos and anything else that has value to consumers and enterprises. Smartphones and tablets equipped with cameras have pushed this technology from mainly industrial applications to broad consumer and enterprise applications.

NFC payment: NFC allows users to make payments by waving their mobile phone in front of a compatible reader. Once NFC is embedded in a critical mass of phones for payment, industries such as public transportation, airlines, retail and healthcare can explore other areas in which NFC technology can improve efficiency and customer service.

7. Appliance madness: Organizations are generally attracted to appliances when they offer hands-off solutions to application and functional requirements, but organizations are also repelled by appliances when they require additional investments (time or software) for management functions. Thus, successful appliance products must not only provide a cost-effective application solution but also require minimal management overhead.

Despite the historical mixed bag of successes and failures, vendors continue to introduce appliances to the market because the appliance model represents a unique opportunity for a vendor to have more control of the solution stack and obtain greater margin in the sale. In short, appliances aren’t going away any time soon. But what’s new in appliances is the introduction of virtual appliances. A virtual appliance enables a server vendor to offer a complete solution stack in a controlled environment, but without the need to provide any actual hardware. We see virtual appliances gaining popularity and fully expect to see a broad array of virtual appliance offerings emerge during the next five years. However, the growth in virtual appliances will not kill physical appliances; issues such as physical security, specialized hardware requirements and ecosystem relations will continue to drive physical requirements.

The very use of the appliance terminology creates great angst for some vendors and users — particularly for physical appliances. Strictly speaking, a highly integrated platform like Oracle’s Exadata or VCE Vblock is not a true appliance; these are factory integrated systems that will require some degree of configuration and tuning, even when the software stack is integrated; they will never fit the classic notion of a “pizza box.” But while such systems will not be consumed as appliances, they are certainly packaged and sold in a very appliance-like manner. Many other physical appliances will be more faithful to the concept — they will be plug & play devices that can only deliver a very prescribed set of services.

8. Complexity: The sources of complexity within IT are easy to spot. They include the number of initialization parameters for input into starting an Oracle database (1,600) and the number of pages (2,300) of manuals to use a Cisco switch. The complexity increases, though, when we look at combining several elements, such as Microsoft Exchange running on VMware. What makes this complexity worse, however, is the fact that we are not getting our money’s worth: Historical studies suggest that IT organizations actually use only roughly 20% of the features and functions in a system. This results in large amounts of IT debt, whose high maintenance costs for “leaving the lights on” divert needed funds from projects that can enhance business competitiveness.

9. Evolution toward the virtual data center: As we enter the third phase of virtualization (phase 1: mainframe/Unix, phase 2: basic x86), we see that the higher the proportion of virtualized instances, the greater the workload mobility across distributed and connected network nodes, validating fabric and cloud computing as viable architectures. As more of the infrastructure becomes virtualized, we are reshaping IT infrastructure. In the future, the “fabric” will eventually have the intelligence to analyze its own properties against policy rules, create optimum paths, change them to match changing conditions, and do so without requiring laborious parameter adjustments. x86 virtualization is effectively the most important technology innovation behind the modernization of the data center. With it comes a sea change in how we view the roles of compute, network, and storage elements: from physical and hardwired to logical and decoupled.

10. IT demand: With the increased awareness of the environmental impact data centers can have, there has been a flurry of activity around the need for a data center efficiency metric. Most that have been proposed, including power usage effectiveness (PUE) and data center infrastructure efficiency (DCiE), attempt to map a direct relationship between total facility power delivered and IT equipment power available. Although these metrics will provide a high-level benchmark for comparison purposes between data centers, what they do not provide is any criteria to show incremental improvements in efficiency over time. They do not allow for monitoring the effective use of the power supplied — just the differences between power supplied and power consumed.

For example, a data center might be rated with a PUE of 2.0, an average rating. But if that data center’s manager decided to use virtualization to increase average server utilization from 10% to 60%, the data center would become more efficient in its use of existing resources, yet the overall PUE would not change at all. A more effective way to look at energy consumption is to analyze the effective use of power by existing IT equipment relative to the performance of that equipment. While this may sound intuitively obvious, a typical x86 server will consume between 60% and 70% of its total power load when running at very low utilization levels. Raising utilization levels has only a nominal impact on power consumed, yet a significant impact on effective performance per kilowatt.
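A small worked example, with invented numbers that follow the scenario above, shows why PUE alone misses the improvement:

```python
# Worked example of the point above: raising server utilization barely changes
# facility PUE, but greatly improves useful work per kilowatt drawn by the
# facility. All numbers are illustrative.

def server_power_kw(utilization, peak_kw=0.5, idle_fraction=0.65):
    """Rough model: a server draws ~65% of its peak power even when idle,
    consistent with the 60-70% figure cited above."""
    return peak_kw * (idle_fraction + (1 - idle_fraction) * utilization)

pue = 2.0  # total facility power / IT power; unaffected by how busy servers are

for utilization in (0.10, 0.60):
    it_kw = server_power_kw(utilization)
    facility_kw = it_kw * pue                 # adds cooling, UPS, lighting overhead
    work_per_facility_kw = utilization / facility_kw
    print(f"utilization {utilization:.0%}: facility draw {facility_kw:.2f} kW, "
          f"relative work per facility kW = {work_per_facility_kw:.2f}")
```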

Pushing IT resources toward higher effective performance per kilowatt can have a twofold effect of improving energy consumption (putting energy to work) and extending the life of existing assets through increased throughput. The PPE (power to performance effectiveness) metric is designed to capture this effect.

Follow Michael Cooney on Twitter @nwwlayer8 and on Facebook.


All contents copyright 1995-2012 Network World, Inc. http://www.networkworld.com