Archive

Posts Tagged ‘Scott Fulton’

Top 10 Windows 8 Features #5: Live Performance and Reliability Charts

May 31st, 2012 05:30

Perhaps the most visible change to the Windows 8 Desktop (the “old” half of Microsoft’s new operating system) will come from live performance graphs. Improved heuristics in the new Task Manager and all-new graphs in Windows Explorer, of all places, will tell you when “burst mode” kicks in and when your hard drive is slowing down.

In this 10-part series, 26-year veteran Windows tester Scott Fulton walks us through the best features, faculties and functions of Windows 8.

Performance, to the user, is the impression that things are moving smoothly and promptly. It’s been said that if you find yourself measuring performance, it’s because you don’t have it – although now that I think about it, a politician may have said this, not an IT pro. With Windows 8, Microsoft is becoming more comfortable with giving everyday folks easier access to meaningful performance metrics – and by “meaningful,” I mean “not the Windows Experience Index.”

Making Performance Meaningful

Mind you, there have been performance monitoring tools for Windows for quite some time. The screenshot above shows Windows 7 (not Windows 8), and in the upper right is Task Manager, with a graph from its Performance tab. The window on the lower right is an independent tool called Resource Monitor, which can be pulled up from Task Manager. The problem with the original Sysinternals tools (maybe the only one, really) is that they present their data in very rich detail, making it somewhat inaccessible to the amateur. Like the tax code, you can decipher it easily enough if you know what it means.

Last week, one of my hard drives began failing on my multiboot test system. What I noticed first, however, looked like a complete system performance drop-off. I noticed the trouble in both Windows 7 and Windows 8, so I suspected a hardware failure. But I wasn’t certain: Perhaps a Media Center device driver I had recently added to both operating systems was interfering with them.

It was the new Task Manager in Windows 8 that gave me the evidence I needed of a cache memory failure on one of my disks. I can’t show you the exact moment I saw that data, but I can show you where it was: It’s in the Task Manager window. Don’t you see it? In the screenshot above, it’s on the right.

The Performance tab on Windows 8’s new Task Manager presents basic utilization charts in a much easier-to-read format that dispenses with the “stereo volume bars” nonsense. (It’s a computer, folks, not a graphic equalizer.) And in its new format, you can select Options > Always On Top, then right-click the window and select Summary View to eliminate the window dressing and narrow it down to just the contents. In this form, you can stretch and size the contents, and then drag them to some out-of-the-way place.

As the screenshot at the top of this article indicates, yes, Task Manager and Metro-style apps can co-exist, and not by means of Metro’s funky app-snapping mechanism. For that shot, I double-clicked on the CPU utilization chart to have Task Manager center on that chart only. In case you’re wondering, CPU utilization in the context of Windows 8 means the relative workload at any one time for all cores in the CPU collectively. On quad-core PCs, Windows 7-based utilization tools tend to render results on a 400-point scale. So folks who think their Web browsers hog CPU cycles at 105% utilization (as if that’s really possible) might be interested to learn that their processor is really being taxed at about 26%. Here in Windows 8, the chart correctly registers full utilization at 100%.
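To make that arithmetic concrete, here is a minimal Python sketch of the difference between the two scales; it only illustrates the normalization, not how Task Manager itself samples the processor.

```python
# Toy illustration of the two CPU utilization scales described above.
# On a quad-core machine, a per-core scale runs 0-400; Windows 8's
# Task Manager reports the same workload normalized to 0-100.

def aggregate_utilization(per_core_percentages):
    """Average the per-core loads into a single 0-100 figure."""
    return sum(per_core_percentages) / len(per_core_percentages)

# A browser "using 105%" on the old 400-point scale is really one busy
# core plus a little spillover -- about 26% of the machine overall.
cores = [100.0, 5.0, 0.0, 0.0]    # old-style readings, one per core
print(sum(cores), "points on the 400-point scale")            # 105.0
print(round(aggregate_utilization(cores)), "% of total CPU")  # 26
```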

In Windows Explorer (the file manager program), every version of Windows since Windows 95 has had a progress bar indicating the relative completion of long copy operations. In Windows 8, the optional “More Details” view (which you can turn on once and leave on) reveals a heuristic graph showing the bandwidth of transfers in progress, inside a taller version of the progress bar. I was able to isolate my failing hard drive as the cause of the system slowdown by attempting a big copy operation, then watching the graph. Rather than a consistently slow copy, I could see that the bandwidth had been reduced to zero, then would occasionally burst to full and regular speed and maintain that speed for as much as a minute. That’s behavior consistent with a hard drive that either has a failing memory cache or whose regulator is not reporting it’s spinning at a consistent speed.
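If you want to reproduce that diagnosis yourself, the measurement is easy to script. Below is a rough Python sketch that copies a large file in chunks and prints throughput roughly once per second – essentially what Explorer’s graph visualizes. The file paths are placeholders, and deciding what counts as a “stall” is left to you.

```python
import time

CHUNK = 4 * 1024 * 1024     # 4 MB reads
INTERVAL = 1.0              # report roughly once per second

def copy_with_throughput(src, dst):
    """Copy src to dst, printing MB/s per interval so stalls stand out."""
    copied = 0
    last_report = time.monotonic()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            block = fin.read(CHUNK)
            if not block:
                break
            fout.write(block)
            copied += len(block)
            now = time.monotonic()
            if now - last_report >= INTERVAL:
                mb_per_s = copied / (now - last_report) / 2**20
                # Long runs near 0 MB/s punctuated by full-speed bursts
                # point at the drive (or its cache), not the system.
                print(f"{mb_per_s:8.1f} MB/s")
                copied = 0
                last_report = now

if __name__ == "__main__":
    copy_with_throughput(r"D:\test\big_video.ts", r"E:\test\big_video.ts")
```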

A Full Rundown That Makes Sense

Finally, for the first time, Windows now has no problem actually showing you, on a graph, when and how it’s crashed over time. One of my absolute favorite new performance graphs in Windows 8 is called Reliability Monitor. Think of it as a crash dump you can actually read.

The line at the top represents a “Reliability Index” – Windows’ own internal assessment, on a 10-point scale, of how well its internal services, functions and drivers are working. It’s surprisingly harsh on itself, grading itself gradually higher for each hour in which no critical or nominal events occur, but scoring itself down the moment something does go wrong. While the index itself may be an unimportant number, the telltale stair-steps in the line tell you when critical events occurred, as plotted against the calendar axis along the bottom.
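Microsoft has never published the exact formula behind the Reliability Index, but a toy model makes the stair-step behavior easy to picture. The weights below are invented for illustration only; the real monitor’s accounting is certainly more involved.

```python
# A toy model of a 10-point "reliability index": the score drifts upward
# while nothing goes wrong and drops sharply when a failure is logged.
# The weights are invented; this is NOT Microsoft's actual formula.

def update_index(score, calm_hours=0, critical_events=0, warnings=0):
    score += 0.02 * calm_hours          # slow credit for trouble-free hours
    score -= 1.5 * critical_events      # hard drop for crashes
    score -= 0.5 * warnings             # smaller drop for warnings
    return max(1.0, min(10.0, score))   # clamp to the 1-10 scale

score = 10.0
# (hours, critical events, warnings) for five consecutive days
for hours, crits, warns in [(24, 0, 0), (24, 0, 1), (24, 2, 0),
                            (24, 0, 0), (24, 0, 0)]:
    score = update_index(score, hours, crits, warns)
    print(f"{score:5.2f}")    # 10.00, 9.98, 7.46, 7.94, 8.42 -- stair-steps
```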

To get a full report of every Windows-related event that happened on a particular day, including ordinary things like patches, updates and software installations, you click on the day in question. Now, how many times has this happened to you: You install something that Windows Update said you must have, and suddenly your performance slows down. With this chart, you can verify your suspicions: You can see the time when you installed the update (or when your PC installed it automatically), and you can gauge the relative performance drop compared to before the installation.

Microsoft is famous for burying great new features behind mountains of junk, and Reliability Monitor appears to be no exception. The company may change things for the final version, but in the Consumer Preview, you find Reliability Monitor like this: In the Notification Area, click the Action Center flag, and from the pop-up, select Open Action Center. Click the Maintenance category, then from the choices that slide down below it, locate and click the hyperlink View Reliability History.

Hardware fails and software fails. Operating systems fail even more often. These are everyday facts of life, which won’t change just because some PC users switch to tablets. The way to cope with and overcome everyday failures is through information – through being able to perceive what’s wrong rather than having to guess.

In the Windows Vista era, when a security feature was tripped, the screen went completely black rather than giving you any clue as to what had gone wrong. It’s a bit like having a disease, but instead of being told the diagnosis, having all the doctors in the hospital lock themselves in their offices. With that type of response, you might conclude it’s not only fatal but contagious. And that could be for something minor like a misplaced Registry entry.

Windows 8’s willingness to be explicit about its own performance tells me that Windows is finally maturing. After three decades of this, it’s about time.

Source: Top 10 Windows 8 Features #5: Live Performance and Reliability Charts

Internet Society: There’s Room for Compromise on Net Neutrality

May 29th, 2012 05:00

The group of caretakers of Internet standards and practices has been on record supporting fundamental principles linked with net neutrality. But in Part 3 of ReadWriteWeb’s interview with Sally Wentworth, the Internet Society’s public policy spokesperson says that enabling a service provider to engineer a competitive advantage for itself may also be considered fair.

If you go back and read every major story on the topic of net neutrality, you may still come away asking the basic question: What is it, really? 

Net Neutrality is one of those strange political topics that you think you understand until you begin studying it. From one side of the aisle, you’ll be told it’s about one price for one tier of bandwidth; from the other, it’s all about deregulation. If you deregulate a market, you don’t get one price.

The Internet Society (ISOC) – the coalition of caretakers of Internet standards and practices, including the IETF (Internet Engineering Task Force) – is on record as supporting equal access to the Internet for all the world’s citizens. That support has been trumpeted under headlines stating it supports net neutrality.  But it’s presumptions like this that can get a global consortium into trouble, especially with member countries and other stakeholders who think the Society has just endorsed their particular interpretation of Net Neutrality.

In this third and final part of our interview with ISOC Senior Public Policy Manager Sally Wentworth, we talked about how difficult it is to navigate a path that has a dozen different maps.

Scott Fulton, ReadWriteWeb: When I ask Internet companies what’s the number one policy issue they face, four out of five say it’s “net neutrality.” Their definitions vary because net neutrality means different things to different people. But one thing they can agree upon is the notion that bandwidth should be like tofu: plain, homogenized and one level – not stratified. It should be available for all companies at the same, essentially fair, rates. So how does that get regulated and by whom? Is this something we must leave to individual countries to sort out, or do we need to start negotiating an international regulatory framework that countries can follow for their own best interests?

Sally Wentworth, Senior Public Policy Manager, The Internet Society: My first view would be that there is no one-size-fits-all approach to something like this, because different countries really do have different economic models with respect to their communications markets. The competition levels in the United States are different from what we’ve seen in Europe, and certainly different from what you might find in emerging markets, for example. So it’s difficult to provide a single regulatory solution that’s going to be applicable or successful [for all countries].

Having said that, I think there are some basic principles that apply. That’s been our view, because we had a similar experience to you. We said, “Okay, we should say something about net neutrality,” and [the responses we got sounded like], “What is it you want me to comment on?” “Net neutrality” seems to be a buzzword that means lots of different things to lots of different people. So if we step back from that and say, “What are the basic principles that should apply?” we have said that we believe that people should have access to the legal content of their choosing. They should have competition among carriers. And there should be transparency between the user, the consumer and the provider as to what network management means, and how it’s been implemented for their service. We do recognize that networks are managed, but they should not be managed in ways that are anti-competitive or discriminatory.

When you step back from that, different countries may have different tools or require different policy measures to fulfill those principles. Maybe the solution in some country is to really focus on promoting competition in the marketplace. Other countries might say that’s less of an issue; [they may say], “We have plenty of last-mile competition,” but there’s an issue with transparency and making sure customers know what the terms of service really mean. Like I said, I don’t think there’s a one-size-fits-all [solution]. But in the end, if you think of having an Internet experience – you go online, and [as far as you can tell], This Is The Internet – then those are the principles that should apply.

“I think it’s fair to say we will see innovation on the technical side; we’ll also see innovation in the business models and business cases. And that’s probably a good thing.”

- Sally Wentworth, senior public policy manager, The Internet Society

RWW: Comcast has a system in place right now where subscribers can get access to first-run, on-demand movies that are delivered via Internet, without that service counting against the customer’s download caps. Subscribers can’t do the same thing with their Netflix service even if they subscribe to Netflix over Comcast. A lot of people are claiming that’s not competitive. Comcast says it can offer the service like this because it’s going through content delivery networks like Akamai, they are the networks of Comcast’s choosing, and it makes delivery [of this service] cost-effective.

You talked about fairness and about preserving competition. In a lot of people’s minds, those two are joined at the hip. And there are some who would make the case that, if you’re going to promote competitiveness, then part of being competitive is engineering an advantage for yourself and not necessarily giving that advantage to your competitor. Don’t you think it’s important that innovation, with respect to competitiveness, should enable the innovators to have some type of competitive advantage, even if it’s for a limited period of time?

SW: Yes, I suppose I probably would. There is, and always has been, in the Internet space, new and innovative business models. Some of those business models have survived, and some of them haven’t. In the cases where they haven’t, then oftentimes either somebody’s come up with a better idea or a better application, or the application itself didn’t meet the expectations of the end user. Yeah, I think it’s fair to say we will see innovation on the technical side; we’ll also see innovation in the business models and business cases. And that’s probably a good thing. 

 



Source: Internet Society: There’s Room for Compromise on Net Neutrality

Internet Society: ICANN, Internet Transitions and Why IPv4 Won’t Die

May 21st, 2012 05:29

When your job is to be open to everyone’s ideas, sometimes the hardest part for you is to just go with the right one. In Part 2 of ReadWriteWeb’s interview with Internet Society (ISOC) senior public policy manager Sally Wentworth (Part 1 of which was published on Thursday), we discuss how difficult it can be to navigate the routes of change in Internet architecture, especially when everyone out there – ICANN, Comcast, Russia, etc. – seems to have a different idea.

While maybe hundreds of white papers are published every day leading off with the statement that the Internet is changing so very rapidly, with respect to its technical underpinnings, real change has been dreadfully slow. The exodus of Internet Protocol hosts to an IPv6 address system whose benefits are almost undisputed has yet to begin after nearly two decades of initiatives. And the top-level domain system that helped make Web addresses friendlier to the world than phone numbers is rapidly disintegrating into a bizarre carnival of conflicting interests and outrageous conduct. It’s a state of affairs that gives justification to proposals like that of Russian President Vladimir Putin: that the Internet be brought under more direct governmental control.

 


Scott Fulton, ReadWriteWeb: It’s hard to take a look at just what’s happened in the past few years, with respect to the opening up of the Top Level Domain system by ICANN, and not say there’s been a little bit of chaos there as a result of it. It seemed to many a good idea at the time to diversify and take the proposition of building top-level domains into a multilingual system one step further and expand it into a multi-social system. I myself spoke out in favor of the .XXX top-level domain, because I believed that it was useful to give people a very easy way to turn off a certain channel of content that they don’t want to see.

But since that time, I was cooking dinner the other day in my kitchen, and I had one of these basic cable channels on. And one of the ads was trying to sell me on the idea of getting my own .XXX domain name, the reason being that I should get mine before someone else does. Almost an implied blackmail there: If I’m a sensible person, then I don’t want my name associated with porn. We’re starting to see an opening up of the idea of private interests purchasing their own top-level domains, building up more on-the-fence TLDs like .vegas. Which makes me think it’s going to be hard to make the case that a deregulated model for Internet governance always works. When you have a chaotic example like that, it’s easy for someone like President Putin to simply point to the .XXX domain and say, “Look what happened there! Do you want this to happen to the rest of the Internet?”

Isn’t the final solution maybe something in between complete deregulation and complete government centralized control?

Sally Wentworth, Senior Public Policy Manager, The Internet Society: We have never said there is no role for government or for public policy. There is a role for public policy at the national level, and in some cases, there are quite good examples of international cooperation among policy makers. I’m not sure we’re talking about an either/or. One question is, how is the policy developed? Policy that’s developed in the back room, with limited transparency and a small number of interested parties, is likely not going to produce a result that is going to be good for end users or the Internet itself.

“Policy that’s developed in the back room, with limited transparency and a small number of interested parties, is likely not going to produce a result that is going to be good for end users or the Internet itself.”

Sally Wentworth
Senior Public Policy Manager
The Internet Society

The question is, how is the policy developed? Who is allowed to participate? And is there sufficient sunshine/transparency/collaboration/participation by the relevant stakeholders in developing the policy so that a good outcome can be produced for that country, for that community? There are plenty of public policy initiatives that are quite useful. We see emerging a large number of broadband plans around the world, ICT strategies by governments to help promote the deployment of IPv6, strategies to try to address things like cybersecurity or consumer protections. These are all constructive, but again, it’s the direction of, can you do this, or are governments doing this in a way that is participatory and transparent and involves the relevant stakeholders, or is this just the province of a few people in a room making policy for everyone?

RWW: I spoke a few months ago with Richard Jimmerson [who leads ISOC's Deploy360 IPv6 awareness campaign] about ISOC’s efforts to incentivize the transition to IPv6. He told me a story about how a lot of commercial vendors are failing to explain IPv6 to their customers because they can’t come up with the “value-add” message: “IPv6 gives you _____” They don’t seem to be capable of filling in the blank. If ISOC truly is the multistakeholder model that you want it to be, how come so many companies, after so long – since IPv6 has been with us for decades – don’t have an understanding of the tremendous benefits that IPv6 offers?

SW: I think sometimes it is a matter of selling the story. Also, the resources haven’t yet come to a point where there is a requirement to transition.

We are seeing quite a few companies worldwide now that see [IPv6] as an imperative. The question now is, can we convince them to build it as a transition, rather than being confronted with a problem at some point? You want this to be a smooth transition to IPv6 rather than a difficult one, so they have time to test it. So sometimes these technical issues aren’t as front-and-center, but… this is fundamentally good for the overall Internet. And there will be a transition; it will happen.

RWW: It’s hard to say that’s inevitable. I’ve covered IPv6 since the 1990s, and I’m starting to think the IPv6 transition might not necessarily happen in my lifetime anymore. It doesn’t seem to want to be a one-time event. It’s not like the analog-to-digital TV transition in the United States, where we just threw a switch.

SW: Oh, no, absolutely not. I was at the White House when we did that! Very different!

RWW: The digital TV transition was a magnificent success, for all intents and purposes. I think it went very smoothly, and that had to have confused millions of people who, despite all the news, didn’t quite understand what was going on. Their TVs weren’t working! But we made it work, one way or the other. And you’d think that if it were as simple as throwing a switch, we’d engineer the switch, and we’d have Microsoft and Red Hat and all the operating systems vendors create in their systems something that’s an obvious switch for the administrators to throw. And say, “On January 14, 2014, you press this button.” What’s to stop them from doing that?

SW: It’s hard because IPv4 is not going to disappear after a certain date. There will still be content and devices that depend upon IPv4. So what’s going to happen is – and we’re already seeing it happen – IPv6 is going to live on alongside IPv4. I think there will be – and there increasingly is – a move by companies, as they see others in the marketplace and as they see it’s good for their long-term business interests, they are making that transition. They’re deploying IPv6 already, and many of their devices are ready for it. There will be a preponderance of traffic that will move to IPv6. But it will not be a switch like the digital television transition.

It’s also not something that’s necessarily visible to the end user. I may or may not be aware of whether my network is running IPv6 or not, or whether my device is IPv6-capable. This is going to be something that happens over time as devices and content become IPv6-accessible.

Administrators do need time to test this in their networks, and that’s why we do things like World IPv6 Day last year, and World IPv6 Week this year. The goal this year is that you turn it on and you leave it on. Last year, a lot of companies had a chance to turn it on, test it, see what went right, what went wrong, where we needed to do more work. What we also found, though, is that a lot of companies left it on, because they didn’t have the problems they thought they would.
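Wentworth’s point that end users often can’t tell whether they’re on IPv6 is easy to verify for yourself. The sketch below, using only Python’s standard library, checks whether a host can be resolved and reached over IPv6 from your machine; the hostname is just an example, and a failure may reflect your ISP or home router rather than the site.

```python
import socket

HOST, PORT = "www.example.com", 80   # substitute any dual-stacked site

def ipv6_reachable(host, port):
    """Return True if we can resolve and connect to host over IPv6."""
    if not socket.has_ipv6:
        return False
    try:
        for family, socktype, proto, _, addr in socket.getaddrinfo(
                host, port, socket.AF_INET6, socket.SOCK_STREAM):
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(5)
                s.connect(addr)
                return True
    except OSError:
        pass
    return False

print("IPv6 path available" if ipv6_reachable(HOST, PORT)
      else "IPv4 only (for now)")
```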

 


Photo credit: The Internet Society, 2012

Source: Internet Society: ICANN, Internet Transitions and Why IPv4 Won’t Die

Top 10 Windows 8 Features #6: Secure Boot

May 16th, 2012 05:00

It’s the single greatest dilemma of modern society: How much freedom would you trade to get more security – or vice versa? Since Windows XP became the most exploited operating system in history, Microsoft has taken bold moves – not all of them very popular, but usually very effective – to sever the routes of exploit. User Account Control, though controversial, eliminated perhaps 90% of account-elevation exploits. Now the company makes another bold security move – changing how Windows 8 boots to increase security, potentially at the cost of some freedom for certain users and non-commercial developers.

Microsoft Windows 8 will fully embrace a computer security architecture that has been a very long time in the making: the Unified Extensible Firmware Interface (UEFI), created in the 1990s by Intel, and developed later by a consortium that also includes AMD and embedded processor developer ARM. Essentially, UEFI performs the functions that ordinary BIOS used to perform (getting the components of your computer up to speed), but rather than following a set agenda, UEFI works like more of an operating system in itself, making sure your Windows (or Linux, or whatever other) OS is accessible, intact and legitimate before booting it. As with most security changes, though, there are side effects – particularly when working with some dual- and multi-boot Windows 8 machines.

Microsoft has made several demonstrations of UEFI support since announcing Windows 8 last September. The most convincing demonstration involves a thumb drive. Many computers, especially in the office, are geared to look for the presence of operating systems on thumb drives before hard drives, especially for purposes of recovery. Malicious users can plug thumb drive-based OSes directly into victims’ systems, and perhaps gain access to the entire office network. But that’s only if the computer’s BIOS clears the thumb drive’s operating system. With UEFI installed on the motherboard, it won’t.

Many newer PCs already have UEFI in their firmware, and there’s a good chance you’re using it now. But with Windows 8, you would begin using it for what it was built for in the first place: restricting the loading of OSes to those that can prove themselves legitimate and untampered with. Once you install Windows 8 (as our tests with the Windows 8 Consumer Preview confirm), the OS-clearing capability of UEFI kicks in. With some firmware, you can turn off this option. Yet with quite a few systems, once this feature has kicked in, OSes that can’t sign themselves are locked out.
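Conceptually, the gatekeeping works like an allow-list check performed before the firmware hands control to a boot loader. The Python sketch below is a drastic simplification – real UEFI Secure Boot verifies signatures against key databases (db/dbx), not bare hashes – but it captures why an unsigned, unknown loader simply never runs. The hash value is hypothetical.

```python
import hashlib

# Simplified model of Secure Boot's decision: an allow-list (db) of
# trusted measurements, a deny-list (dbx) of revoked ones, and a refusal
# to boot anything that isn't positively recognized.

TRUSTED_DB = {
    "3f79bb7b435b05321651daefd374cd21b3a2a6f6e5d86ef2f3e4a5c6d7e8f901",
}                       # hypothetical hash of a vendor-signed loader
REVOKED_DBX = set()     # loaders explicitly revoked by the vendor

def firmware_will_boot(image_bytes):
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in REVOKED_DBX:
        return False                 # known-bad loader, always refused
    return digest in TRUSTED_DB      # unknown/unsigned loaders refused too

legacy_loader = b"an OS loader that was never signed for UEFI"
print(firmware_will_boot(legacy_loader))   # False -- locked out
```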

One example, ironically, is a copy of Windows XP that was installed on a hard drive attached to a non-UEFI system. With UEFI fully engaged, you cannot boot to that XP-based drive in a multiboot system. The photo above shows a UEFI screen (not Windows 8, but system firmware) from an Intel Core i5-based 3.3 GHz PC I built. Note the UEFI banner attached to the drive where I’ve installed Windows 8 Consumer Preview. On this particular PC, I cannot (successfully) disengage secure boot. So I cannot boot my Windows XP disk – a fact which gives me only minor trouble with respect to testing software.

What the Lockdown Means

Trusted computing is what the entire commercial computing industry wants… the keyword here being “commercial.” Not all computing is accomplished by commercial entities; and many would argue that some of the most important advances in computing in the last 10 years have come from developers who shunned commercial interests. At any rate, there is a considerable plurality of free software and hardware developers (free as in “freedom”), many of whom are in the Linux community, all of whom are legitimate artisans. They build computers and systems because they can, and because it’s fun.

Because their community is not centered around vendors or commercial interests, there is no nucleus of authority responsible for what they build. This is how the community wants it. But it becomes a problem when the hardware platforms they rely upon adopt a protocol that equates legitimacy with commercial responsibility. Put simply, if you’re not a vendor, there’s no way you can “sign” your operating system for UEFI. And that may mean you can’t set up a dual-boot system that includes both Windows 8 and a free Linux distribution.

By “free Linux,” I’m referring to any noncommercial distribution, and Red Hat and Canonical have warned that their distros (Fedora and Ubuntu, respectively) may be among them (Intel disagrees with respect to Fedora). This is not exactly the swap-meet crowd we’re talking about, but a sizable bunch of legitimate PC users.

In this 10-part series, 26-year veteran Windows tester Scott Fulton walks us through the best features, faculties and functions of Windows 8.

No. 10: Refresh and Reset

No. 9: File History

No. 8: Storage Spaces

No. 7: Client-side Hyper-V

As Red Hat mobile Linux developer Matthew Garrett first explained last September, “There is no centralised signing authority for these UEFI keys. If a vendor key is installed on a machine, the only way to get code signed with that key is to get the vendor to perform the signing. A machine may have several keys installed, but if you are unable to get any of them to sign your binary then it won’t be installable.”

Over the years, companies such as Microsoft and Intel have maintained that enthusiasts are free to build their systems using platforms that are not UEFI-enabled, and to install free Linux on them. This is becoming harder and harder, as the motherboard industry (Asus, MSI and Gigabyte, among others) has already embraced UEFI firmware. Almost by definition, a motherboard without UEFI is a cheap motherboard. Enthusiasts may like “free,” but they abhor cheap.

More than once, Microsoft has reminded me of the relatively small size of the enthusiast community. But despite their small numbers, they are an extraordinarily influential group. Commercial vendors that disrespect them are committing a blunder akin to a politician uttering a racial slur in front of a fellow with a cell phone camera.

Trust, UEFI and How We Got Here

At some point, Microsoft – now one of the co-architects of UEFI – had to take the plunge and support what’s essentially its own work.  The reason why concerns one of the most dreaded threats that every installed copy of Windows still faces.

In fact, for any computing device – be it a PC, a smartphone or a garage-door locking system – a malicious program could overwrite the contents of its operating system kernel, substituting what’s come to be known as a rootkit. In this scenario, when you reboot your device, it isn’t exactly what you think it is anymore. Just how easy it is to accomplish this was demonstrated at the RSA security conference a few months ago, by a team led by McAfee’s former CTO.

When security problems are caused by software claiming to be something it’s not, the solutions usually involve authentication – the implementation of some type of trust system. (Just the word “trust” sends up red flags among veteran IT workers.) In any chain of trust in a computer network, there must be some unimpeachable root that is capable of vouching for the authenticity of everything else. Operating systems are typically vulnerable, and thus serve as poor roots of trust. Engineers prefer the root to be inside the computer hardware, at a more tamper-proof level.  Installing trust in any deep level has rarely been without controversy, mainly on the part of users who have learned from experience that, given the choice, both hardware and software vendors tend to trust themselves above a competitor.

The concept of building a root of trust in the BIOS traces back to 1998, with Intel’s Extensible Firmware Interface created for its Itanium processor-based servers. The idea there was to build a more programmable shell that could effectively manage the system’s transition from powering on and readying the main bus and peripherals, to launching the OS. When regulatory agencies’ scrutiny of Intel began to intensify, Intel turned over EFI to an industry consortium including AMD, Microsoft, and embedded processor maker ARM.

From the very beginning, the UEFI group had stated its intention to load operating systems other than Windows. And the first successful field implementation of secure booting with UEFI in consumer-grade equipment was in the first Intel-based Macs. UEFI is already a reality in PCs sold today, and especially in motherboards sold to enthusiasts and system builders (like me). So the issue isn’t that Windows 7 doesn’t already “support UEFI,” or that UEFI by definition locks out Linux. The tools for Linux makers to adopt UEFI protocols are available openly today. So it’s wrong to say UEFI is technically incompatible with Linux. Instead, there’s a kind of “social gap” that the commercial vendors are not willing to help fill.

The Decision

The real question, of course, is whether the UEFI flexibility tradeoff affects you. Specifically, does losing the ability to build a dual- or multi-boot system that includes Windows 8 alongside one or more operating systems that do not support UEFI impact your ability to work or use your computer the way you want?

  • If you are a part-time Linux user and part-time Windows 7 user, the answer may be “yes.” You may not want to upgrade to Windows 8 until you know for certain you can dual- or multi-boot to your preferred flavor of Linux as well as Windows 8.
  • If you use Linux occasionally, perhaps for testing, then you might consider running Linux from a virtual machine instead of creating a dual-boot system. If you’re testing Linux hardware reliability, though, a virtual implementation probably isn’t a good solution for you. If you’re just interested in the software or in the development tools available with Linux that have no counterpart in Windows yet, you may be perfectly comfortable running Linux from Oracle VirtualBox in Windows 8.
  • If you use Windows XP, and you want to upgrade to a modern PC but keep your XP-based hard drive, the UEFI lockdown could affect your ability to work with XP. You’ll be better off keeping your XP drive where it is, and running it from there.

For everyone else, though, UEFI brings an enormous benefit: the confidence that a rootkit will not be able to substantively change the kernel of the operating system, with the aim of enabling malicious software. In my opinion, for most users, the benefits far outweigh the tradeoffs. 

Of course, I said that about UAC in Vista too, and I found my point of view caricatured in a legendary Apple ad campaign.

Source: Top 10 Windows 8 Features #6: Secure Boot

Top 10 Windows 8 Features #7: Client-side Hyper-V

May 8th, 2012 05:00

Server-side virtualization is a modern-day fact of life. Today’s data centers pool their processing, storage and even network resources to create macrocosmic virtual-machine entities that transcend the boundaries of hardware. We call that “the cloud.”

With Windows 8, Microsoft extends its server-class hypervisor platform to the desktop. And no, it’s not just a play for enthusiasts and testers: There’s a solid reason for Hyper-V to become the backbone of future Windows versions.

In this 10-part series, 26-year veteran Windows tester Scott Fulton walks you through the best features, faculties and functions of Windows 8.

No. 10: Refresh and Reset

No. 9: File History

No. 8: Storage Spaces

Being on someone’s Top 10 list, no matter whose it is, does not guarantee you permanent prominence. Put another way, you wouldn’t have to go back very far in time to find an example of a technology feature that seemed make-or-break then, but that today most folks have forgotten even existed.

Three years ago in my ongoing preview of Windows 7, I placed a feature called XP Mode at #3 on my Top 10 list. Today you might look at that pick and conclude, “Boy, was Scott ever wrong about that one!” The point is, though, that sometimes a feature is right for its time, even if it can’t weather the test of history. This may be the case for my #7 pick for Windows 8.

We introduced you to client-side Hyper-V during ReadWriteWeb’s coverage of the Build 2011 conference last September. It’s Microsoft’s industrial-strength, 64-bit virtualization platform for running operating systems and applications within an envelope that behaves like hardware but is made of software. Virtualization is the principal technology that makes cloud computing possible today. Being able to use resources made up entirely of software, whose capacity or power is provided by any number of real resources, is the basis of virtualization.

Many observers say desktop virtualization is solely for hobbyists and experimenters. But there are plenty of real-world, everyday use cases for desktop virtualization. If you work from home even part-time, for example, and your job involves connecting to your office server, you could do both yourself and your office a favor by connecting only through a virtual machine (VM). This way, your PC still belongs to you. Anything your office forces you to download and use, such as a proprietary VPN, can mess up only the configuration of your virtual (disposable) machine. You can still maintain your personal office files on real hard drives. Anything you may happen to download that may be contrary to your office’s stated policy, or that could impact your office’s assets, is separated from your office VM.

In short, if you’re tired of your office’s IT department messing up your home computer even more than your cable company already has, virtualization can put a stop to it.

Acknowledging the Gap

The virtualization market has been a difficult path for Microsoft. To be a virtualization leader in the data center, you have to have a system for representing virtual resources as a file, and then have a management platform for deploying, maintaining, moving and scaling those resources. VMs don’t have to behave like real machines; theoretically, their storage bases can expand or shrink as necessary, and even their internal memory can expand.

Hyper-V was created for Windows Server 2008, although Microsoft already had a virtualization platform with two main variations (and several nuanced versions). It was a 32-bit platform gained through the 2003 acquisition of Connectix, and its first incarnation for Windows was called Virtual Server 2005. It was described at the time as a “server consolidation tool,” and although administrators could create theoretical virtual networks with IIS on one server and SharePoint on another and Exchange on a third, the often blind-faith management process put admins in the position of a contestant on The Dating Game, with the available prospects hidden behind a wall.

On the client side, however, Microsoft made the brilliant move of re-releasing Connectix’ Virtual PC (VPC) technology for free, effectively nullifying all the minor players and reducing its serious competition to VMware, Xen (later acquired by Citrix) and Sun’s VirtualBox (later acquired by Oracle). The last great culmination of VPC came in the form of XP Mode for Windows 7, which could run a Windows XP application in an XP window in conjunction with the Windows 7 Desktop. This way, you didn’t have to change worlds to run an old program in a new environment. (Strange how even good ideas don’t always stand the test of history.)

But there are serious and irreconcilable technical incompatibilities between Hyper-V and the old VPC.

  • First, Hyper-V is designed for a cloud-based reality, in which resource sizes – including compute power – are scalable. As a result, a virtualized server can be dramatically more flexible than a physical one. By contrast, VPC was built in an era where the magic of emulating a PC with 256 MB of RAM and a 10 GB hard drive seemed limitless enough.
  • Second, the configuration files for VPC and Virtual Server (which were not compatible with one another, if you can believe it) are not compatible with Hyper-V.
  • Third, the components of a virtual machine that enable it to recognize itself as a VM cannot be the same when you move the VM to Windows 8, because of the differences in the Windows 8 kernel.

So as Microsoft transitions its PC-based clients to a fully 64-bit realm, there can be no seamless transition between XP Mode for Windows 7 and a similar feature, if there was to be one, for Windows 8. As cool as it seemed in 2008 for an XP window to run friction-free inside Windows 7, it’s simply not a feature that folks really need today.

Crossing the Gap

So it’s getting left behind. At the same time, as Microsoft adapts Windows 8 for a more dynamic world of tablets and portable devices, the new system won’t be suited to run inside a VPC envelope.

This does not mean that operating systems you installed inside VPC-based virtual machines cannot run under Hyper-V for Windows 8. Microsoft’s virtual hard disk format (VHD) is still very much supported in Win8. In fact, the way you mount a VHD now is to simply double-click the VHD file in Explorer. So your existing assets, if they’re still important to you, are not lost; the trick is to create a new Hyper-V virtual machine in which the VHD file is installed.

Once you get Windows 8 set up and working, your first step in the new Hyper-V Manager will be to create a virtual switch – a kind of bridge between the network loop for your VMs and your real network card. The trick here is remembering that it’s an internal switch to an external network. So you choose Internal first, and then choose your (real) network card in the list marked External network, as I’ve done above. If you remember this strange distinction, it’s easy.

The process of creating a new virtual machine is actually the creation of a configuration file that links the specifications for the VM you want to run, with the file acting as its hard drive. So you connect the VM to the bridge you created a moment ago, as shown above.

Then, since Hyper-V accepts pre-existing VHDs, you use this step to point it to the VHD file. When you engage the VM for the first time, it will boot whatever OS you’ve already installed on the VHD. Keep in mind if the existing OS is Windows, it will behave as though you’ve taken the physical hard drive out of an old computer and reinstalled it into a new one. This will trigger the Windows Activation process because it will see what it believes to be an entirely new processor and memory. Microsoft has granted automatic licenses to people using virtual Windows on any licensed copy of Windows on a physical PC, so unless there’s something technically wrong (which can happen, as you’ll see in a moment), activation will work.
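The same flow – virtual switch, a new VM wrapped around the old VHD, then power-on – can also be scripted through the Hyper-V PowerShell module that installs alongside client Hyper-V. The sketch below drives those cmdlets from Python; the adapter, VM and file names are placeholders for your own setup, and each call needs an elevated prompt with the Hyper-V feature enabled.

```python
import subprocess

# Placeholders: "Ethernet" is your physical adapter's name, and the VHD
# path points at the virtual disk you brought over from Virtual PC.
STEPS = [
    'New-VMSwitch -Name "Bridge" -NetAdapterName "Ethernet"',
    'New-VM -Name "Win7FromVPC" -MemoryStartupBytes 2GB '
    '-VHDPath "C:\\VMs\\win7.vhd" -SwitchName "Bridge"',
    'Start-VM -Name "Win7FromVPC"',
]

for cmd in STEPS:
    # Run each Hyper-V cmdlet in its own PowerShell invocation.
    subprocess.run(["powershell", "-NoProfile", "-Command", cmd], check=True)
```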

The screenshot above shows an existing Windows 7 VM whose VHD was created in VPC, and a Windows XP VM whose VHD was created way, way back in the Vista days, running simultaneously on the Windows 8 Desktop. It’s not a seamless fit – at least, not yet.

  1. You can’t get the semi-transparent “Aero” environment for Vista and Win7 in Hyper-V, at least not through Windows 8’s new Virtual Machine Connection program. (Theoretically, you may be able to emulate Aero by connecting to a running VM using Remote Desktop Connection. In our tests of the Win8 Consumer Preview, however, Aero did not work yet.)
  2. Both Win7 and XP VMs will require you to uninstall any existing Virtual Machine Additions (VMA, which was created for VPC) before they can install network connections that make them behave like part of your network. This works fine for Win7, but for XP – which is more finicky about product activation – once it realizes it’s not in the same environment, it will insist that you re-activate Windows first. Which you can’t do if you don’t have a network connection, and the conundrum proceeds from there. So if at all possible, you should use your old VPC in your old operating system to uninstall VMA before you move your VHD into Hyper-V. (Enough V’s for you?)
  3. When creating the virtual network bridge between Hyper-V and your real network, Windows 8 takes your network offline for the briefest moment. But in that moment, the Win8 Consumer Preview will tell you the network is completely offline, even when it’s not. This is obviously a “beta bug.”
  4. Transitioning a VM created exclusively for Win7’s XP Mode (which sublimates its own Desktop) into a Hyper-V VM (in which the Desktop is visible) will likely be difficult, and not exactly automated. There’s a relationship between the XP account system and that of Windows 7, which no longer applies to Hyper-V in Windows 8. You may prefer to simply create an XP-based VM from scratch, except that to do that, you’ll need to relocate your long lost Windows XP CD-ROM, and undergo the excruciating XP setup procedure that Vista successfully exorcised.

Anybody who uses Windows for any length of time will come to appreciate the ability to utilize a program in a safe-for-testing environment – in a situation where anything that can go wrong doesn’t impact your “production” machine. I have said for years that in the future, all Windows operating systems should be virtual. That is, Hyper-V should serve as the hypervisor layer for everything you do on your PC, tablet and, one day soon, your phone. This way, such affairs as system backup and restoration will become trivial. Imagine if viruses could impact only your running virtual PC, and the physical layer beneath it could detect when and where that impact occurs.

Virtual computing is true computing, and if Microsoft wants to remain competitive, it must inevitably embrace the need for an abstraction layer between hardware and software across the board. The company chose to concentrate on other priorities with this go-round. But while putting Hyper-V in client-side Windows 8 is not the complete solution, it’s a step in the right direction. 

Source: Top 10 Windows 8 Features #7: Client-side Hyper-V

Top 10 Windows 8 Features No. 8: Storage Spaces

May 3rd, 2012 05:00

Your typical hard disk drive today measures its capacity in the half-terabytes. If you’ve got a 250GB drive – or maybe a half-dozen of them – you may think they’re not good for much anymore. But what if you could use them to build a private cloud just like those OpenStack folks in the enterprise? In Windows 8, Microsoft is bringing the power of private clouds to consumers with the inclusion of Storage Spaces.

In this 10-part series, 26-year-veteran Windows tester Scott Fulton walks you through the best features, faculties and functions of Windows 8.

No. 10: Refresh and Reset

No. 9: File History

Make Your Own Cloud

For years I’ve said that the next great version of Windows will be a deliverer of cloud service, and I’ve had folks from Microsoft tell me, “Oh sure, that’s what we’re doing, look at how we sync your photos with SkyDrive!” Indeed, syncing was pretty cool from a 2009 perspective, but Dropbox and Box.net are becoming nearly ubiquitous now. So with respect to Windows, syncing may already have become a “me, too” service.

Real cloud technology, when you get down to brass tacks, is about services and resources being provided to you in a logical fashion that’s separate and distinct from their physical locations. So when we talk about “public clouds” and “private clouds,” we’re referring to resources provided in big pools, on a metered basis, via the Internet (compared with big pools of resources – like hard drives – collected together in your own office). If the next Windows is to have any hope of greatness, it needs to start delivering some private cloud technology – ways for you to collect your processing and storage power together. One glimmer of hope for Windows 8 comes from Storage Spaces, a way for you to finally build one storage volume out of many.

The story of Storage Spaces in Windows 8 is not quite as simple as you may expect it to be (which explains the story of my life), but it makes sense if you stick with it a few minutes. With “LUN Provisioning,” you create a logical storage unit (using logical unit numbers, or LUNs) by collecting storage devices together – not volumes on those devices, mind you, not by “joining volume D: with volume E:” but by surrendering devices in their entirety to the collective. A new kind of virtual volume is created, not by slicing each device into segments but by joining the devices together under a single file system. It’s this notion that distinguishes LUN provisioning from pooling, which is a simpler system for collecting volumes together, but which is more susceptible to failure.

Screenshot of Storage Spaces in Windows 8 Consumer Preview

When you provision a Storage Spaces LUN, you’re effectively formatting the drives contained within it. There’s no choice about this, because you’re replacing the file system normally used for the drive with a completely different, more resilient one. In fact, you might even consider creating a LUN around just one drive, if that’s all you have, simply because you’d be increasing the file system integrity for that drive. (You just might find me talking more about this little fact in the near future.)

Redundancy for Redundancy’s Sake

Interestingly, the logical size of the joined devices in the LUN does not necessarily have to be the sum of their respective capacities. It can actually be larger. That may sound crazy, but it’s a characteristic of many cloud technologies, and you just have to get used to it. The idea is that you provision your storage pool with the maximum size you expect to actually ever need. You could have three 2TB drives, yet provision a LUN encompassing those three as a 15TB volume. This technique is called thin provisioning, and it sounds a bit like a lie told by comedian Jon Lovitz. (“Why, I’ve got ni-… te-… fifteen terabytes! Yeah, that’s the ticket.”)

Bear with me. In this example, the LUN would make maximal use of the 6TB it actually has available while presenting itself as a 15TB volume. Although enrolling three devices in a single LUN would appear to increase the theoretical chances of a read or write failure by a factor of three, Storage Spaces will replicate data across the devices for as long as it can. It’s like RAID, except at the operating system level, implementing your choice of redundancy methods.

When you do have more than one device, mirroring is like RAID 1. It creates a redundant arrangement of “slabs” (in this case, 256MB data segments) across either two or three devices. Striping is like RAID 0, in which data slabs are equally distributed across as many devices as there are in the LUN. If you’re going to use striping, you may as well choose parity. This method stripes information across the total number of devices in the LUN minus one, and then stores extra parity data in the leftover device. This way, in the case of device failure, the missing segments may be mathematically recovered.
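The parity option works on the same principle as RAID 5, and a few lines of Python show why losing one device isn’t fatal: the missing slab can be rebuilt by XOR-ing what survives. Real Storage Spaces slabs are 256MB and the layout rotates across devices; the byte strings here are just stand-ins.

```python
# Toy demonstration of parity resilience: two data slabs plus their XOR.
# Lose any one of the three and it can be rebuilt from the other two.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

slab_a = b"family photos..."      # written to device 1
slab_b = b"recorded HD TV.."      # written to device 2 (same length)
parity = xor(slab_a, slab_b)      # written to device 3

# Suppose the device holding slab_b fails; recover it from the survivors.
recovered = xor(slab_a, parity)
assert recovered == slab_b
print(recovered.decode())          # recorded HD TV..
```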

It’s Bigger on the Inside

Now, all of this doesn’t quite complete the answer to this question: Why would you want to provision a LUN for a larger size than the sum of its physical capacities… especially since, for redundancy’s sake, Storage Spaces already writes more data to the devices than your files actually consume?

The answer has to do with… the future! You may not own all the storage capacity you eventually need, especially if you’re running a home network in a family where you record a lot of HD video with Windows Media Center. Thin provisioning gives you a reprieve, letting you add new devices when you need them. You’ll need them soon after Windows 8 tells you so – as your consumed capacity leaves less and less room for redundancy.
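The bookkeeping behind thin provisioning is simple enough to sketch. The toy pool below uses the three-2TB-drives-presented-as-15TB example from above; the 70% warning threshold is arbitrary, chosen only for illustration (the real service decides for itself when to nag you).

```python
TB = 10**12

class ThinPool:
    """Minimal model of a thinly provisioned pool: big promise, small reality."""

    def __init__(self, physical_bytes, provisioned_bytes):
        self.physical = physical_bytes        # what the drives actually hold
        self.provisioned = provisioned_bytes  # what the volume claims to be
        self.used = 0

    def write(self, nbytes):
        if self.used + nbytes > self.physical:
            raise OSError("out of real capacity -- add another drive")
        self.used += nbytes
        if self.used > 0.7 * self.physical:
            print("warning: time to plug in another device")

pool = ThinPool(physical_bytes=3 * 2 * TB, provisioned_bytes=15 * TB)
pool.write(4 * TB)   # fine: only 4 of the real 6 TB consumed
pool.write(1 * TB)   # crosses 70% of physical capacity -> prints the warning
```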

Storage Spaces are for storage devices that are directly attached to one PC in your homegroup, either internally (SATA, SCSI or iSCSI interface) or externally (SATA, SAS or USB). It doesn’t work with network drives that are mapped to a volume because – as hopefully you understand now – Storage Spaces is not a collection of volumes. Since a network drive can be accessed by multiple PCs, it can’t be part of a LUN that’s attached to just one of those PCs.

For now, you can’t use Storage Spaces to create a system volume. The service that runs Storage Spaces is a formal service like the network stack, and Windows must be running to use it. For every PC I’ve ever built, I create at least one separate drive to store data and media files anyway. It might sound silly, but conceivably, I could use Windows 8 to create a storage space around just that one extra drive, with “striped + parity” storage (it’ll “stripe” over one disk, but at least there will be parity), and then thinly provision it for far more than its native capacity. That way I can add a new device to the mix whenever I feel like it (or can afford it), just by plugging it into the USB or external SATA port.

Resilience architecture is changing the way we compute, and it’s helping once again to distinguish the power of the PC from one of these other, cutesy, newfangled devices you see floating around. I’m looking forward to being able to build my own cloud on my desk whenever I need one.

Source: Top 10 Windows 8 Features No. 8: Storage Spaces

Bullpen Capital’s Duncan Davidson on VC Funding and “The Era of Cheap”

April 26th, 2012 04:00

For a technology startup company, launching an initial product and surviving long enough to gauge its success requires two orders of magnitude less money today than at the start of the previous decade. This from a man who knows the difference, having made both the bigger investments of the last decade and the much smaller ones of today: Duncan Davidson, Managing Director of Menlo Park-based Bullpen Capital.

Bullpen is one of a growing number of early-stage funds – or, perhaps more accurately, “earlier-stage,” since even his fund is no longer the first money out of the gate. In an interview with ReadWriteWeb, Davidson explains how the latest industry to receive the disruption treatment has been the venture capital business itself, with the epicenter of the quake right around San Francisco.

For more of Duncan Davidson’s insights – along with those of five other top-tier VCs – download Scott Fulton’s exclusive 14-page report: “Growing Your Business In The Modern Economy: 6 VCs Weigh In.”

“There used to be the Four Horsemen of the IPO world, back in the ’80s and ’90s. They were some great, small banks: Robertson Stephens, Hambrecht & Quist, Alex. Brown & Sons, Montgomery Securities. They all went away; they got rolled up in 1999 and 2000 into these too-big-to-fail banking operations,” Davidson tells us. “The question is why? The answer is, they couldn’t make any money trading, [or on] giving analyst support for small stocks because the spread got too small.”

The spread he’s talking about was a casualty of the decimalization of the U.S. stock market: the change, led by NASDAQ in early 2001, from pricing stocks in increments of one-eighth of a dollar to increments of one cent. “When you make the spread a penny, only very high-volume stocks can garner economic value for the banks to be worried about it,” says Davidson. Firms that gave analyst support for small stock issues used to provide price target projections that had some meat on the bones, because their highs and lows were separated by eighths of a point.

Believe it or not, this is where the change begins. When the smallest fluctuation in a stock’s price shrank from 12.5¢ to 1¢, the market for small stocks shrank with it. Decimalization forced the banks to tighten their spreads, and analyst coverage of small issues began to look like time wasted commenting on penny stocks. It became unprofitable for the Four Horsemen to continue doing business independently. When they exited the scene, the small IPO market followed suit.
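The arithmetic behind that collapse is easy to sketch. Roughly speaking, a trading desk’s gross revenue from a stock is the spread times the shares traded; the daily volume below is purely illustrative, not a real figure.

```python
# Back-of-the-envelope look at why penny spreads killed small-stock coverage.
daily_shares_traded = 200_000      # a thinly traded small-cap, for example

old_spread = 1 / 8                 # one-eighth of a dollar, pre-2001
new_spread = 0.01                  # one cent after decimalization

print(f"pre-decimalization:  ${daily_shares_traded * old_spread:,.0f}/day")
print(f"post-decimalization: ${daily_shares_traded * new_spread:,.0f}/day")
# Roughly $25,000 vs. $2,000 a day -- too thin to fund analyst coverage.
```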

What followed is what the VC industry now calls “The Era of Cheap,” and we’re still very much in it.

Number of U.S. IPOs by year, 1980-2011, with pre-IPO last 12-month sales less than (small firms) or greater than (large firms) $50 million (2009 purchasing power). Credit: Professor Jay Ritter, for testimony before the Senate Banking Committee

As Davidson describes it, the way startup funding worked at the turn of the decade was that small firms would raise “seed round” capital from friends and family to get off the ground. From there, they’d make the leap to “Series A,” which used to refer to the first round of institutional capital invested. Davidson says, “Those are the funds you’ve all heard about: Sequoia, Kleiner Perkins, Mayfield and a whole bunch of great funds. The reason you had to do that back then was because it took $5 million to launch a company into the marketplace, or at least to get the technology done and see [how it performed].

“What’s happened in the last decade, the cost of launching an Internet product – forget other technologies, we’ll focus on Internet – has dropped from $5 million to $500,000 in 2005, to $50,000 today,” he continues. “Two orders of magnitude. That’s why a couple of kids in their dorm room can start a company, launch it, and see if anybody cares out there. With that great decrease in cost to launch something, you have the emergence of all these new funds. The emergence has been extraordinarily traumatic and disruptive on the venture industry.”

By Bullpen’s count, at least 80 new firms have been formed recently with the intention of helping newer firms raise smaller amounts prior to what’s still called the “Series A” round, though that round may now be the fourth or even fifth rung on the ladder.

A startup (or what the newly passed JOBS Act calls an emerging growth company, or EGC) may not need a seven-digit investment until what’s called the “seed round” – which, in practice, is now round three, coming after the “accelerator round.” Bullpen may come into play with about $2.25 million on average, before “Series A” even begins.

“It’s all driven by the Era of Cheap,” says Davidson. “It’s all because it’s a lot less expensive to start a company than it used to be, so you keep the amount of funding in much, much less, and as long as you can before you go for big money.”

Davidson’s company explains that backers are looking for companies that can sustain leaner growth models. It’s still growth, mind you, but it’s more efficient, less wasteful, and a bit less self-assured. A company can validate its growth metrics, Bullpen believes, in as little as four to six months after its initial funding period.

“You don’t know if these deals are real; it’s all an experiment. And the leaner you keep it, the more options you have with what to do with it. Once you put a lot of money in, you’re no longer lean and flexible and kinkin’ and jivin’ and trying to figure it out,” he says. “Now you’re on a bit of a race to actually prove out the value of the money you raise, and scale it. When you keep it lean, everybody has better options, and a better outcome when all the dust settles.”

The benefit for Bullpen, and other investors in its space, is risk avoidance. Having a little less money to burn drives the EGC to get its product to market on time. The risk that the product won’t catch on is then lessened by network effects made feasible by app stores and digital distribution, which drive up customer adoption. Risk to the company’s business model in the early stages is reduced by adopting this app store “template” that customers worldwide have already embraced.

“Look at Instagram, which got bought by Facebook. How old is Instagram? It’s not a very old company, and it just sold for a billion dollars. Zynga goes from zero to multi-billion-dollar company in four years. There’s been two fundamental changes here that have been overlooked by a lot of people, and this is why it’s fundamentally different than it was in 1999: One change is this Era of Cheap and the lean finance model… Everything’s better with the lean model.

“Money, in effect, can be a drug – it can be a problem,” he continues. “You get too much money, you lose flexibility instead of gaining it. You pay yourself too much money, you sort of relax, you don’t have the same urgency – all these things happen when you take too much money in. In a lean model, you’re in a race, you’re worried, you’re not there yet. You’re at a constant level of anxiety, and it makes you more agile and responsive – a lot of good things happen.”

The second change comes from the globalization of technology markets made possible by the Internet, as Bullpen’s Duncan Davidson explains: “You’re not just selling to a few companies in the U.S.; you’re selling to a huge consumer marketplace. So a franchise can evolve extraordinarily quickly. Bill Gates once called this the ‘friction-free economy,’ and it’s pretty damn close. This explains why a company like Zynga or Groupon or Instagram can go from nowhere to a very valuable company, extraordinarily quickly, much more rapidly than ever before. The changes are: It’s very cheap to do things, given modern technologies, and the global market makes a potential win absolutely huge – faster, bigger than we ever imagined before.”

(NOTE: For more of Duncan Davidson’s insights – along with those of five other top-tier VCs – download Scott Fulton’s exclusive 14-page report: “Growing Your Business In The Modern Economy: 6 VCs Weigh In.”)

Source: Bullpen Capital’s Duncan Davidson on VC Funding and “The Era of Cheap”

Top 10 Windows 8 Features #9: File History

April 23rd, 2012 04:00 admin View Comments

The disk maintenance tools that Microsoft ships with Windows have always been, at best, tolerable. Now that there’s an entire industry centered around archival storage systems and services, it’s about time Microsoft gave its consumer versions of Windows a file archiving system appropriate for the 21st century.

Replacing Windows Backup, Windows 8 File History is the file archiving system that should have been in Windows 7 – and it points the way toward a post-PC future for Microsoft.

In this 10-part series, 26-year veteran Windows tester Scott Fulton walks you through the best features, faculties and functions of Windows 8.

See #10: Refresh and Reset

Ironically, the component that provides this functionality is already part of Windows 7, and has been there since Windows XP. Windows has had a file versioning system for years now. It’s called Volume Shadow Copy, a way for the system to maintain multiple backups of a file for different points in time. Windows 7 uses VSS (yes, that’s the correct abbreviation… go figure) to back up certain system files that may need to be called back from the archives when you execute a System Restore – that is, when you undo changes to the system, rolling them back to an earlier point in time.
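
You can peek at the shadow copies VSS is keeping on your own Windows 7 or Windows 8 machine. The sketch below simply shells out to the built-in vssadmin tool from Python (run it from an elevated prompt); it’s meant to show where this versioning data lives, not to stand in for any Microsoft API.

    # Minimal sketch: list the Volume Shadow Copy snapshots on this machine by
    # shelling out to the built-in vssadmin tool. Run from an elevated
    # (administrator) prompt on Windows.
    import subprocess

    result = subprocess.run(["vssadmin", "list", "shadows"],
                            capture_output=True, text=True)

    if result.returncode != 0:
        print("vssadmin failed - are you running as administrator?")
    else:
        # Keep only the lines identifying each snapshot and its creation time.
        for line in result.stdout.splitlines():
            if "Shadow Copy ID" in line or "creation time" in line:
                print(line.strip())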

Windows Backup can use this service… kind of. I don’t know anyone who actually does this, but once you back up a folder to an archival device, you can right-click a file in that folder in Windows Explorer, open the Previous Versions tab, and recall an earlier version of that file. Of course, if you’re going to restore an entire subdirectory full of documents, right-clicking and restoring each file this way is not a great way to spend one’s weekend.

This is what a developer would call a service without a real interface. The real UI for this service has been added to Windows 8, and yes, folks will note that its inspiration probably comes from Mac OS X’s Time Machine. It’s called File History, and it replaces Windows Backup. That name change alone will confuse some folks, and perhaps a shortcut from Backup to File History would help in the final edition.

The idea behind File History is this: If you continue to use Windows 8 the way Microsoft wanted you to use Windows 7, then you’ll have bound your important personal folders to libraries. Your Office documents will be in Documents, your digital camera files will be in Pictures, your downloaded and transcribed videos will be in Videos. So it’s silly to have to tell a different Windows program which files are important to you if you’ve done it once already.

File History already knows what’s important to you. If there are files in your libraries that don’t need backing up, you can exclude them by clicking Exclude from File History and adding them to the list in the box above.

This becomes important for the following reason: File History is designed to work continually – not weekly or overnight like an ’80s-style backup, but every hour. The suggested use case is to plug in an external hard drive via USB, but you can also map a cloud-based Microsoft SkyDrive folder as a Windows network share, at which point File History instantly becomes a cloud backup service. For Windows Phone users, that could mean anything you save to your media libraries is automatically synced and available on your phone.
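
To make the “every hour” model concrete, here is a rough sketch – plain Python, not anything Microsoft ships – of the behavior File History promises: on a schedule, copy your libraries into a timestamped folder on the backup target, so earlier versions of each file stay available. The library paths and target drive below are placeholders, and File History itself is smarter about copying only what has changed.

    # Rough sketch of an hourly, versioned copy of library folders to a backup
    # target (an external drive or a mapped network share). Paths are
    # placeholders; File History itself copies only what has changed.
    import shutil
    import time
    from datetime import datetime
    from pathlib import Path

    LIBRARIES = [Path.home() / "Documents", Path.home() / "Pictures"]
    TARGET = Path("E:/FileHistory")     # external drive or mapped SkyDrive share
    INTERVAL_SECONDS = 60 * 60          # once an hour

    def snapshot():
        stamp = datetime.now().strftime("%Y-%m-%d_%H%M")
        for library in LIBRARIES:
            if not library.exists():
                continue
            # Copy the whole library into a timestamped folder, so earlier
            # versions of every file remain available on the target.
            shutil.copytree(library, TARGET / stamp / library.name,
                            dirs_exist_ok=True)

    while True:
        snapshot()
        time.sleep(INTERVAL_SECONDS)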

It also raises the question of bandwidth consumption. If you frequently copy over videos from your camcorder (as opposed to just your phone), you won’t want to use File History as a cloud backup service for them. Unfortunately, that may limit your options for backing up smaller media files and everyday documents, because your choice of backup device applies to everything File History protects. If you’ve dedicated an archival drive like a Seagate FreeAgent to your camcorder videos, you won’t be able to use File History to send your documents and tunes anywhere else.

Once again, the tool that’s shipped with Windows is not perfect for all situations. Just as it’s been for the last three decades, that leaves an opening for third-party backup providers – Acronis, for example – to build a market.

Still, the evolution of Windows Backup into File History is important as the nerve center of people’s digital lives shifts away from the PC. If Microsoft wants to keep a hold on its customers’ everyday life and work, it needs to stake a firmer claim on the services and tools that bridge all the devices now sitting at that nerve center.

That means Microsoft needs a stronger cloud presence. SkyDrive is nice, but it’s not as powerful a product as Dropbox or Box.net. The new File History is a compass pointer in the exact direction that Windows needs to go: toward a service that transcends both PCs and devices. As I’ve said before: Not Windows Phone, not Windows PC. Just Windows.

Source: Top 10 Windows 8 Features #9: File History

Top 10 Windows 8 Features No. 10: Refresh and Reset

April 23rd, 2012 04:00 admin View Comments

Yes, there really are 10 important and beneficial changes you’ll find in Microsoft Windows 8, beginning with Refresh. Let’s just say it’s closer to perfect than Windows Backup. Refresh is Microsoft’s first real attempt to address Windows’ most touchy consumer pain point: Reinstallation as a solution to problems that no one can diagnose or understand. Now, there’s a chance that with this partial reinstallation feature, you can have Windows start over without losing absolutely everything – the files in your libraries, and your Metro-style apps, survive.

In this 10-part series, 26-year veteran Windows tester Scott Fulton walks you through the best features, faculties and functions of Windows 8.

Perhaps you’ve seen the famous comic posted to Oatmeal.com titled How to Fix Any Computer. Not to give away all the secrets of the comic’s trenchant forensic analysis, but Step 2 of the Windows side of the equation is unfortunately familiar to just about any Windows user: “Reformat hard drive; reinstall Windows.”

A PC operating system is like steel wool. You can’t use it in even the slightest way without mutating it. Installing a new program typically alters the System Registry, which to many Windows veterans even looks like steel wool. Inconsistencies in the Registry can affect the entire system, and much of the last 17 years of Microsoft’s development of Windows has been devoted to adjusting, accounting and compensating for these discrepancies so that folks don’t have to reinstall Windows every time something goes wrong. System Restore (a form of which premiered with Windows Me) was created to overwrite a newer, possibly damaged Registry with an older, hopefully undamaged copy, in hopes that the system could pretend the changes suspected of damaging the system never happened.
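
For a sense of how ordinary settings end up woven into that steel wool, here is a small sketch that uses Python’s standard winreg module to read one harmless per-user value – the current Desktop wallpaper – straight out of the Registry. Installers write (and occasionally mangle) thousands of values just like it.

    # Minimal sketch: read one per-user setting straight from the Registry,
    # using only Python's standard winreg module (Windows only).
    import winreg

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Desktop") as key:
        wallpaper, value_type = winreg.QueryValueEx(key, "Wallpaper")

    print("Current wallpaper:", wallpaper)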

It’s taken well over a decade for Microsoft to guide the evolution of Windows software to a state where applications are maintained separately from the operating system. We’re not quite there yet, though we’re close enough now that Microsoft feels comfortable introducing this useful “partial reinstall” feature to Windows 8.

Called Refresh, it’s based on the Windows Image Manager (WIM) services introduced with Windows Server 2003 R2. Refresh replaces the kernel files in Windows, overwriting the existing installation with a fresh image of the operating system. But using WIM, it adds back the separately maintained components from the new architecture of Windows 8, including existing WinRT “Metro-style” apps you’ve already installed. So you don’t have to start all the way at square one, though it won’t necessarily take you all the way back to where you started.

Test Setup

My test computer for this experiment is a quad-core Intel Core i5 2500K desktop system. I purposely did not use a fresh Windows 8 Consumer Preview installation, but rather one where I’d made software installations and changes, including:

  • Microsoft Office, after I authored several documents and made some settings changes. I expected to have to reinstall Office, but I wondered whether it would remember me afterward;
  • The Visual Studio 11 beta, plus some of the SDKs that go with it;
  • Mozilla Firefox, along with some settings changes that I believed should get stored in a safe directory that survives the operation;
  • A third-party screen capture utility called Screenshot Captor whose stored settings I also suspected should survive;
  • A third-party utility I use to install software directly from ISO images of discs, called Virtual CloneDrive;
  • Some Metro-style apps.

I also made the kind of adjustments that an ordinary user would make. I changed the Desktop wallpaper – which, for Windows 8, applies only to the Desktop mode. I moved a Vista-style Desktop clock gadget from the left to the right of the Desktop (would it get moved back left, or would it disappear altogether?). I also changed my Metro and Start screen background color from crimson (go Sooners!) to cyan (umm, go Seahawks). And I tweaked the network settings to add more sharing features to my homegroup.

I chose these actions to give me clues as to what parts of Windows get overwritten during Refresh. Since this operation will probably be undertaken only when Windows is acting weird, you should actually want some parts of Windows to be wiped clean. If too many things survive, the bad behavior might survive as well.
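
If you want to run the same kind of experiment yourself, a crude way to gather those clues is to record an inventory of a few telltale folders before the Refresh and again afterward, then compare the two. The sketch below does exactly that; the folder choices are merely examples.

    # Sketch: write an inventory of a few telltale folders to a JSON file, once
    # before Refresh and once after, so the two listings can be compared.
    # The watched folders are examples only.
    import json
    import os
    from pathlib import Path

    WATCHED = [
        Path(os.environ["APPDATA"]),                     # roaming application data
        Path(os.environ["LOCALAPPDATA"]),                # local application data
        Path(os.environ["USERPROFILE"]) / "Documents",   # personal documents
    ]

    def inventory():
        listing = {}
        for root in WATCHED:
            files = []
            # os.walk silently skips folders it cannot read.
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    files.append(os.path.relpath(os.path.join(dirpath, name), root))
            listing[str(root)] = sorted(files)
        return listing

    label = input("Label this snapshot (e.g. before-refresh): ").strip()
    with open(f"inventory-{label}.json", "w") as out:
        json.dump(inventory(), out, indent=2)
    print("Wrote", out.name)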

Test Results

The Refresh process consumed about 20 minutes – a little slower than the original install, but still well within reason. It wasn’t long ago that a Windows XP installation could take more than three hours.

For the Windows 8 Consumer Preview, here’s what I noticed immediately after Refresh:

1. My Desktop wallpaper was the same, as were customizations I had made to Windows Explorer (such as “View hidden files and folders”). This indicates that at least some Registry entries survive Refresh. Not all entries, however: My Control Panel, which I had intentionally set to View by Large Icons, reverted to View by Category. Administrators will want a hard-and-fast rule for which settings need to be checked and reset, and which can be counted on to survive Refresh.

2. My Desktop gadget disappeared, suggesting that the contents of the Windows folders were completely overwritten. That’s actually good, because misbehaving components may be parts of system drivers, or even malicious files placed in system folders to feign legitimacy.

3. The Metro background color reverted to crimson, even though it still showed as cyan while the Refresh program was announcing success at the initial sign-in. Easy enough to change back, but the point is that the surviving data for the Desktop and the victim data for Metro appear to have come from two different places.

4. The Metro Start screen and Lock screen pictures stayed the same. Evidently, Metro doesn’t store all of its data in the same place. This may pose some interesting problems with respect to System Restore (not Refresh), and the possibility that rolling a system’s status back to a previous restore point may not restore Metro with the same integrity as it restores the Desktop.

5. The homegroup had to be rejoined, although Metro did remember the homegroup password. This is important for a slightly esoteric reason: If you have a dual-boot PC that also boots Windows 7 (many people will, and I do), changes you make to the folders that Windows 8 includes in shared libraries appear to affect whether those same folders remain shared in Windows 7. I have no clue why or how this is so, but since Windows 8 folders start life as private and unshared (as they should be), rejoining the homegroup may mean you have to make adjustments in Windows 7 the next time you boot it up. This quirk may go away with the final edition of Windows 8.

6. Third-party application data (“AppData”) was cleaned from its hidden folder. That’s both a surprise and a big deal, because it means that not only will you have to reinstall your programs, but you’ll have to start over with your settings. If you had a huge store of bookmarks in Firefox, and you weren’t syncing it via some cloud service, it’s probably lost. And if you had certificates, serial numbers or other data affirming your rights to use commercial software packages, they may be gone as well.

7. The contents of users’ Documents folders remain intact. This is as Refresh’s warning promises. When I installed the Visual Studio 11 beta, I had it create a number of sample files. Even though I typically keep my “My Documents” folder on a completely separate drive (which has saved me more times than I can count), VS tends to put its help files in the local “My Documents” folder on the system drive anyway. That said, they were intact, even though Visual Studio itself was not. Still, this is a good thing, because getting VS functionality back from here takes only minutes.

8. There’s a nice “Removed Apps.html” file on my Desktop listing everything that Refresh had to remove, which I can print out and use as a checklist.

9. My IE10 Desktop home page reverted to Bing.com. Yeah, I caught that, Microsoft. Sneaky devils.

10. My Windows ID remained intact, as it now must in order for me to be able to log onto Windows 8 again. This also means my Xbox Live account remained intact, and Windows 8 games like PinballFX were able to sign in for me. For some users, having the Xbox Live profile will be item No. 1 on the checklist. However, things stored locally – such as my high scores on PinballFX – did not join the party.

One of the first programs I had to reinstall was Screenshot Captor, in order for me to take pictures for this report. The program was able to find my saved settings, which do not rely on the Registry. On the other hand, Firefox thought it was being installed in a completely clean system. So evidently, the rules for reinstalled programs needing a re-education in Windows will be… complicated.

Refresh or Reset?

The big question for many users will be whether Refresh will be any more of a timesaver than what Windows 8 now calls Reset. Reset is a fast way to start over with a completely new and unadulterated installation of Windows 8. Whatever settings you may have had are removed.

The “Reset your PC” warning (shown above) states, “All your personal files and apps will be removed.” That’s not exactly correct. If you sync your files using cloud services such as Apple’s own iCloud, Box.net or Dropbox, or if you keep your important files on a separate drive, naturally, those files would survive even a Reset operation.

Reset wipes the My Documents folder when, and only when, it lives on the same device as the system folder. Microsoft should consider revising this warning. Perhaps: “All your personal files and apps stored on the same device as Windows will be removed.” Even that may be harsh, because (we’ve been told) the new Windows Store may be called upon to reinstall lost Metro apps.
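
A quick way to check which side of that warning you fall on – again, just a sketch – is to compare the drive holding your Documents folder against the Windows system drive:

    # Sketch: does the Documents folder live on the same drive as Windows?
    # This checks the default location; if you have redirected My Documents,
    # substitute that path instead.
    import os
    from pathlib import Path

    documents_drive = Path(os.environ["USERPROFILE"], "Documents").drive.upper()
    system_drive = os.environ["SystemDrive"].upper()     # typically "C:"

    if documents_drive == system_drive:
        print("Documents shares the system drive - a Reset would remove it.")
    else:
        print("Documents lives on", documents_drive,
              "- outside a Reset of", system_drive)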

For me personally (this will not be the case with everyone), the difference between Refresh and Reset is minimal. Reinstalling Windows has become as common for me as, say, washing the car. If I have a tool that polishes the chrome for me so I don’t have to, I might appreciate it a bit, but I won’t call it a lifesaver.

But for most Windows users, anything that reclaims an hour or more of valuable work time will be as good as gold. If Refresh works as well in the final edition of Windows 8 as I believe it will, fewer folks may find themselves, as the Oatmeal chart so quaintly put it, quietly weeping.

Source: Top 10 Windows 8 Features No. 10: Refresh and Reset

IBM VP Anjul Bhambhri: Must Big Data Alter the Enterprise?

March 8th, 2012 03:30 admin View Comments

There’s a public relations brochure template someplace that reads, “________ is changing the way the world does business.” If this were a Mad-Lib, you could insert the proper noun of your choice. Historically, evolutionary changes in business and in the economy that supports it have driven subsequent changes in technology. There are certain very notable exceptions (thank you, Tim Cook), but let’s be honest and admit that databases didn’t spring up from gardens like daisies and change the landscape of business from winter into spring. There was a need for relational databases that went far beyond keeping up with the competition.

So when companies say that big data will change the way you work… really? Is that the best value proposition that vendors can come up with – “It’s coming like a thunderstorm, so you’d better be prepared?” In the final part of ReadWriteWeb’s conversation with IBM Vice President for Big Data Anjul Bhambhri, which continues from part 2, I told her a true story about a customer on a vendor webcast that was set in its ways and resisted the change that the PR folks were saying was inevitable.

Scott Fulton, ReadWriteWeb: As you probably know on a deeper level than I, the reason for database siloing dates way, way back to the 1970s and ’80s, when computing products were purchased on a department-by-department basis. Way back in the mainframe era – which IBM helped the world inaugurate (so it’s your fault) – computing products were purchased, deployed, configured and programmed by the people in finance, in budgeting, in human resources, in insurance management, in payroll. And these were all disparate systems. The archival data amassed from that era rests on these ancient foundations, which makes it seem only natural for people who develop software for a living to say, “We’ve got the power to make it all fit together now, why not use it?”

But I was on a webcast the other day listening to a fellow for about 60 minutes, making exactly your case. Why we should remove silos from big organizations, and make the effort to develop ways of merging big data into “usable meshes,” he called them. It was a good point and it lasted for 60 minutes. And the first question he got from somebody texting in was, “Simple question: Why?” And the presenter said, “What do you mean, why?” And he said, “Okay, don’t you know that these silos exist for a reason? Businesses like ours [I think he was in banking] have departments, and these departments have controls and policies that prevent information from being visible to people in other departments of the business.” And he asked, “Why would you make me spend millions of dollars rearchitecting my data to become all one base, and then spend millions more dollars implementing policies and protections to re-institute the controls that I already have?” And the presenter was baffled; he never expected that question, and he never really answered.

So I wonder if that question has ever been shot your direction, and you ever batted it out of the park?

Anjul Bhambhri, IBM: What you said, I agree with that completely. There’s a reason this has happened. And it doesn’t matter what we do; you can’t just get all this data into one place. Data is going to be where it is in an enterprise. There may be department-level decisions that were made, department-level applications that are running on top of it. And nobody’s going to like [some guy coming in saying] “Let me bring this all together.” It’s too much of an investment that has been made over the years. In hindsight, we can always say this is the way things should have been architected. But the reality is that this is how things have been architected, and you run into this in almost all the enterprises.

“If they start ingesting data from these sources, they have to be cognizant of the fact that they could be dealing with very large volumes. So they don’t want the whole IT infrastructure to collapse because they didn’t anticipate what hardware they should have in place.” – Anjul Bhambhri, VP for Big Data, IBM

My response and suggestion – and we’ve actually done it with clients – has been that, you leave the data where it is. You’re not going to start moving that around. You’re not going to break those applications. You’re not going to just rewrite those applications… just to solve this problem. Really, data federation and information integration is the way to go. Data is going to reside where it is. IBM has done a very good job in terms of our federation technology and our information integration capability, where we are able to federate the queries, we can pull the right set of information from the right repositories wherever it lies. Then we can obviously do joins across these things so that we can do lookups of information in maybe the warehouse, and we can correlate it with information that may be coming from a totally different application. And all of this is done while preserving the privacy and security, the accessibility, the role-based policies that may have been implemented. We can’t ask people to change all that. We can’t have departments just start changing it. If there’s some data that they don’t want another department to see, then that has to be respected.

Even in the big data space, you can imagine this is a question which comes down a lot from the big enterprises that have made huge investments in these technologies. They’re not going to have one data repository. It’s all a heterogeneous environment, and it’s going to continue to stay that way. That is not going to change, nor do we expect it to change.

Also, you don’t really want it to change, right? People have built those repositories and those applications because they were the best choices at the time for that class of applications. Or they may have bought solutions from vendors like SAP, or they could be ERP or CRM systems that they have bought from various vendors. Those cannot all be thrown away. If the companies were using CRM applications, for example, to really understand aspects of the customer, now we want you to continue to use that, you don’t stop using that application. But you may need to augment the information that you can get from a CRM application with what maybe social media offers around the customer, so you can really get more like a 360 view of the customer. Don’t abandon what you’ve got, but integrate. Be able to bring in these new data sources, and the level of tooling [necessary] to be able, in that single dashboard, to pool the information from these CRM applications [and] from these new data sources that may be Facebook or Twitter or text messages – to correlate this information and maybe show aspects of the customer that, if you were only looking at the CRM application, would be incomplete.

I really think federation and integration is the way to go here, and not dictate that data be moved or be in a single repository. Heterogeneity is a reality, and we have to accept it and provide the technology that actually takes advantage of that heterogeneity, and respect the decisions that the customers have made.
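
Stripped to its essence, federation of the kind Bhambhri describes is a join performed in a query layer across systems that never share a repository. The sketch below is a generic illustration in Python – emphatically not IBM’s federation technology – that assembles a single customer view from a hypothetical warehouse table and a hypothetical CRM store, keyed on a customer ID, while each source stays where it is and keeps its own access controls.

    # Generic illustration of federation: pull one customer's data from two
    # separate stores and join it in the query layer, leaving each store where
    # it is. Connection targets, table and column names are all hypothetical.
    import sqlite3

    def warehouse_lookup(customer_id):
        # Stand-in for the data warehouse (here, a local SQLite file).
        with sqlite3.connect("warehouse.db") as db:
            row = db.execute(
                "SELECT name, lifetime_value FROM customers WHERE id = ?",
                (customer_id,)).fetchone()
        return {"name": row[0], "lifetime_value": row[1]} if row else {}

    def crm_lookup(customer_id):
        # Stand-in for a CRM system; in practice this would be a separate
        # database or web service with its own credentials and access policy.
        with sqlite3.connect("crm.db") as db:
            rows = db.execute(
                "SELECT opened_on, summary FROM cases WHERE customer_id = ?",
                (customer_id,)).fetchall()
        return {"open_cases": [{"opened_on": r[0], "summary": r[1]} for r in rows]}

    def federated_view(customer_id):
        # The "join" happens here, in the federation layer, not in either source.
        view = {"customer_id": customer_id}
        view.update(warehouse_lookup(customer_id))
        view.update(crm_lookup(customer_id))
        return view

    print(federated_view(42))

Neither store moves and neither loosens its own policies; the federation layer simply asks each source for what the requester is already allowed to see, then correlates the answers.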

Scott Fulton: You believe the emerging tools that we talked about earlier, that we will need the data scientists to effectively learn how to use, will be tools that won’t change the underlying foundation of data as we currently have it, but simply add a layer of federation on top of that?

Anjul Bhambhri: Whatever is happening behind the scenes, we really just want the data scientists to focus on their own work. Their expertise is needed to identify other data sources that are important to the organization. Given their subject matter or domain expertise, they are the best ones to recommend where else information needs to be gleaned from. And then, of course, the IT group has to make sure those sources can be accommodated in the data platform. They cannot say, “Okay, we have two applications running on the mainframe and all these silos, but we can’t bring in more data sources.” They obviously have to facilitate that.

But from a tooling standpoint, the tools have to be so easy that the data scientist can say, “If I want to know this about customer X, just pull all the data available on this customer” – information coming from the warehouse, from the CRM application, from the transactional system with the latest set of transactions the customer has had in the last day, month, whatever. And there should be a way to ask, “Okay, what is the last interaction that we had with this customer?” Maybe the customer called in, maybe he went to our Web site and did some online stuff. It could just be random pieces of unrelated information about the customer, or it could be aggregated around the customer. But they should be able to look at and visualize these things in the tool. Because, as you can imagine, even random text about this customer has to be presented properly – based on the questions being asked, maybe things have to be highlighted or annotated so it’s visually clear to the data scientist, the person exploring this data, that they aren’t missing some important aspect of it.

Source: IBM VP Anjul Bhambhri: Must Big Data Alter the Enterprise?
