What a long way the net has come. I suppose it's necessary, if gratuitous, to add: 'for better and for worse.'
There’s a moment in this interesting mash-up where the speaker implies the following question: could we re-render the human brain to think more like a machine? This follows from the difficulty of making a machine think like a human.
I had to look up the term ontologies because I know little about information science, and its use in the video seemed to depart from the philosophical term. Here’s Wikipedia’s treatment of ontologies.
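In the information-science sense, an ontology is a formal specification of concepts and the relations between them, closer to a machine-usable taxonomy than to the philosophical study of being. A minimal sketch, with an invented "is-a" hierarchy for illustration:

```python
# A toy ontology: each concept points to the broader concept it falls
# under. The categories here are invented for illustration.
is_a = {
    "blog post": "web page",
    "web page": "document",
    "document": "information resource",
}

def ancestors(concept, hierarchy):
    """Walk the is-a chain upward, collecting every broader concept."""
    result = []
    while concept in hierarchy:
        concept = hierarchy[concept]
        result.append(concept)
    return result

# A machine can infer that a blog post is also a document and an
# information resource, even though neither term appears in its text.
print(ancestors("blog post", is_a))
```

This kind of inference over explicit categories is what distinguishes the information-science usage from the philosophical one.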
There is nothing about the problems faced by the varieties of users. I’m a user, and I know the problems I encounter in searching for information: on the internet, in libraries, and on my own computer, in my own archive of documents.
I’ll mention three challenges. I’ll frame them by saying that I wish my computer-based archives and library archives were indexed by Google.
(1) Usually, my searches for information on Google are satisfied. However, because the results are matched against the real-time indexing my own cognition provides, a search on a given topic (usually in the social sciences) is arbitrarily terminated. In other words, I have no conclusive idea that a given result is the optimum result. I’d also characterize my search methods as partly ad hoc heuristics.
(2) Searches in my computer-based archive are brute force. They leverage Spotlight’s ability to look into the text of every file, but they involve scanning through very long result lists, most of which are not positive hits. As a user, the labor-intensive task of organizing files on my end is ‘too much.’ And fitted to this is the ease with which information can be archived versus the labor involved in organizing it. Call it the intuitive’s curse…
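The brute-force pattern described above can be sketched in a few lines; this is a hypothetical illustration (the file names and contents are invented), not how Spotlight itself is implemented:

```python
# Brute-force full-text search: scan the text of every file for the
# query term, with no organization or metadata to narrow the results.
# The corpus below is invented for illustration.
documents = {
    "notes-2007.txt": "notes on the semantic web and search heuristics",
    "recipes.txt": "a search for the perfect bread recipe",
    "draft.txt": "draft essay on Ivan Illich and industrialization",
    "log.txt": "search log: semantic queries, ad hoc results",
}

def brute_force_search(query, docs):
    """Return every file whose text contains the query term."""
    return [name for name, text in docs.items() if query in text.lower()]

# "search" matches three of the four files, only one of which is likely
# the wanted hit -- hence the long, mostly-negative result lists.
print(brute_force_search("search", documents))
```

The trade-off in the paragraph above falls out directly: archiving a file costs nothing, but every unorganized file adds noise to every future scan.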
(3) The most difficult searches of web and internet resources are those that are very particular and very local. A good example would be somebody’s address. Topic-oriented searches do not fall into this category.
One other note: I would guess my own search capability falls into the highly capable slice of any bell curve. This guess is based on my understanding of how to use the specific query operators of Google search, and on observing how most other people use search. One of the challenges for the semantic web, given that
The Semantic Web is an evolving development of the World Wide Web in which the meaning (semantics) of information on the web is defined, making it possible for machines to process it.
is that any useful, more powerful interface and facilitation has to meet the different modes of differentiated users.
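To make the quoted definition concrete, here is a minimal sketch of what machine-processable meaning looks like. The subject/predicate/object triples follow the RDF idea behind the semantic web, though the vocabulary and facts below are invented for illustration:

```python
# Machine-readable semantics in miniature: facts stored as
# (subject, predicate, object) triples, as in RDF. With meaning made
# explicit, a program can answer a question rather than merely match
# keywords. The facts here are illustrative.
triples = [
    ("Ivan Illich", "wrote", "Deschooling Society"),
    ("Deschooling Society", "is_a", "book"),
    ("Ivan Illich", "is_a", "philosopher"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o) for s, p, o in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# What did Ivan Illich write?
print(query(triples, subject="Ivan Illich", predicate="wrote"))
```

Whether any of this machinery is visible to the differentiated users mentioned above is exactly the interface question at stake.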
For example, I wouldn’t be skeptical of a machine’s ability to qualify results so that I could be confident I had reached the optimum set of results, but I’d like to know beforehand why I needn’t be skeptical. And this would have to be presented to me at my level.
This post starts a series aimed at pointing toward a conception of Web 3.0 drawn from the deployment of the so-called semantic web, for the purpose of having so-called machines read and interpret the data.
Amongst the inner circle here, it goes without saying that this has already been raised as a concept and direction, and it has been supposed that this requires text/lexical analytical tools.
For my own part, I assume lots of people and teams are working to build robust analytic tools. It is most interesting to me personally to consider what the ramifications of Web 3.0 are for users who don’t give a whit about what is happening inside these machines; nor care much about the purposes implicit in the human direction prior to (and thus ‘behind’) machine activities; nor are aware of the long history of efforts to realize effective and efficient data-mining and analysis tools for all sorts of commercial, security, law enforcement, and research purposes.
So what, then, are all the technologies like mashups, XML, Java and the rest, if not 2.0? I actually see them as web 3.0 technologies–not for the casual user or faint of heart. 1.0 was the early web, with its need for knowledge of code and servers; 2.0 is easy entry, democratization, and increased participation; 3.0 is about more complex connections being made.
Web 2.0 is social: many hands make light work. In stark contrast, Web 3.0 is industrial: the automation of tasks displaces human work. But trite definitions won’t prepare us for change. Whatever you call it, our information economy is in the midst of an Industrial Revolution. And if you don’t place the Web within the frame of industrial manufacturing, you won’t see the real disruptive change coming.
This story reads much like the first Industrial Revolution. Artisans and skilled tradesman used to create everything by hand. Then, through the emergence of a handful of technical innovations, came the age of mass production. It was a profound turning point in human history, affecting every aspect of daily life.
Today, most content is still created by hand, the best of it by highly skilled artisans drawing on centuries of scholarship and experience. Recently, we’ve seen significant innovations in social approaches to content creation. But Web 3.0 industrialization takes content manufacturing to an entirely different level. Instead of users manually creating content, machines automate the heavy lifting. Consumers simply push the buttons and get stuff done. Think spinning wheels versus textile mills.
I note in this excerpt the facile leap from content manufacturing to ‘consumers simply push the buttons.’
The middleman is not expressed. Hmmm. Is Web 3.0 partly about the industrialization of mediation?
Some argue that Web 3.0 will be a leveling force, and proceed to speak of more democratization. Others make wolf-in-sheep’s-clothing counterarguments. I would tend to wonder how leveling works in the context of the march of capital and its aims. (But then, I’ve read too much Ivan Illich.)
By which I mean: my 82-year-old mom has never been able to sort out the difference between the web browser, the software, and a website. It’s interesting because she’s been using computers for 30 years and was on the internet at home in the early nineties. She’s also a retired college vice president.
Oddly, whatever the ‘proposition’ is and whatever the ‘term’ is, neither has to be absolutely and concretely mediated before a user goes out and just uses their keyboard, mouse, and screen to access the different experiences provided by arrival at a web location.
If 8% can make this distinction, then I would guess client/server is well beyond the other 92%. So if Web 3.0 is to be the semantic web, it’s not going to roll on top of much user comprehension. This points toward a cleanly carved-out instrumentalism, with all the artificially intelligent infrastructure in the black box, behind the browser, so to speak.
The user is better off not knowing how “it” works!