Windows and Linux are, of course, built for one mouse with single and double click; it's been like that for years. Since I'm over forty and slower to learn, I'll probably be happier with the mouse and keyboard for the rest of my obsolete little life, but that doesn't mean it's a superior interface. The new generation is already happy with their touchable iPhones, and would no doubt like a computer with the interface of an iPad and the huge amount of software of the existing PC.
Why is there no common framework for touch screens on the PC yet? Is it all patents and litigation threats, or just laziness?
The Web also needs an update to work with touch screens, especially with multi-touch and gestures. A touch could be mapped to a button click, but can JavaScript events handle more than one finger on the screen at a time? The existing interface is an x and y coordinate for a single mouse event. Event libraries need to be written for multi-touch and standardised, first for the OS and then for the web browser. Even after that, it is not obvious at what stage we decide that combined movements become a gesture. The OS layer, the application layer (e.g. the browser), and the web layer might, in the worst case, all define and recognise two fingers spinning as different gestures, triggering three events at three different layers. So there's plenty of work for user interface designers for the next ten years or so.
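To make that concrete, here is a rough sketch of what multi-touch handling in the browser could look like, loosely modelled on the W3C Touch Events idea where each event carries a whole list of contact points rather than a single x/y pair. The element id and the rotation threshold are made up for illustration, and the "spin" detection is deliberately naive; the point is only to show where the single-mouse model breaks down and where the gesture decision gets made.

// Rough sketch: tracking two fingers and guessing at a "spin" gesture
// in the browser. Assumes a Touch-Events-style API where each event
// exposes a list of contact points (event.touches), each with its own
// x/y coordinates, unlike a mouse event, which carries just one pair.

var touchArea = document.getElementById('touch-area'); // hypothetical element
var startAngle = null;

function angleBetween(t1, t2) {
  // Angle of the line joining two contact points, in radians.
  return Math.atan2(t2.clientY - t1.clientY, t2.clientX - t1.clientX);
}

touchArea.addEventListener('touchstart', function (e) {
  // More than one finger? A single-mouse model can't even express this.
  if (e.touches.length === 2) {
    startAngle = angleBetween(e.touches[0], e.touches[1]);
  }
});

touchArea.addEventListener('touchmove', function (e) {
  if (e.touches.length === 2 && startAngle !== null) {
    var delta = angleBetween(e.touches[0], e.touches[1]) - startAngle;
    // At what point does "two fingers moving" become a rotate gesture?
    // The 0.3 radian threshold here is an arbitrary guess - exactly the
    // kind of decision the OS, the browser, and the web page could each
    // make differently, firing three different "rotate" events.
    if (Math.abs(delta) > 0.3) {
      console.log('rotate gesture, roughly ' + delta.toFixed(2) + ' radians');
    }
  }
  e.preventDefault(); // stop the page scrolling while gesturing
});

touchArea.addEventListener('touchend', function () {
  startAngle = null;
});

Whether that threshold check belongs in the page at all, or in the browser, or in the OS, is precisely the standardisation question raised above.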