Monday, June 28, 2010

No German Suicide Bombers over Danzig, East Prussia, etc.

Nor is there any mention of the infamous 1995 'Operation Storm' forced expulsion of some 300,000 Serbs from their ancestral homes in Krajina in this article from the publication founded by SMOM William F. Buckley

From the National Review
http://article.nationalreview.com/436085/helen-thomas-turkey-and-the-liberation-of-israel/victor-davis-hanson

excerpt:

Anti-Semitism as displayed by both [Hearst reporter, Helen] Thomas and Turkey’s leaders is not predicated on criticizing Israel, much less disagreeing with its foreign policy. Instead, it hinges upon focusing singularly on Israeli behavior, and applying a standard to it that is never extended to any other nation.

There are plenty of disputes over borders and land in the world. But to Helen Thomas or the Turkish government, Kashmir or the Russian-Chinese border matters little — although the chances of escalation to nuclear confrontation are far greater there than on the West Bank. Has Thomas ever popped off, “Why don’t those Chinese just get the hell out of Tibet?” or “Why don’t those Indians just get out of Kashmir?”

The Palestinian “refugees” — a majority of whom are the children, grandchildren, or great-grandchildren of people actually displaced in 1948 — compose a small part of the world’s refugee population. There are millions of refugees in Rwanda, the Congo, and Darfur. Well over a half-million Jews were ethnically cleansed from the major Arab capitals between 1947 and 1973, each wave of expulsion cresting after a particular Mideast war. Again, few care to demonstrate for the plight of any of these people. Prime Minister Erdogan has not led any global effort to relocate the starving millions in Darfur, despite his loud concern for “refugees” in Gaza. The United States gives far more millions of dollars in aid to the Palestinians than does their Muslim protector in Turkey, who saves cash in winning Palestinian support by practicing anti-Semitism on the cheap. Nor have I heard of any German suicide bomber blowing himself up over lost ancestral land in Danzig or East Prussia, although that land was lost about the same time as some Palestinians left Israel. Few worry that in 1949 tens of thousands of Japanese were forcibly expelled by the Soviet Union from Sakhalin Island.

The world likewise cares little for the concept of “occupation” in the abstract; it is only the concrete example of Palestine that earns its opprobrium. We can be assured that President Obama will not bring up Ossetia with President Putin. He will not raise the question of Tibet with the Chinese or occupied Cyprus with Prime Minister Erdogan. Will Helen Thomas ever ask, “How can Turkey be allowed to keep Nicosia a divided city?” Will she worry whether Greeks are allowed to buy property in the Turkish sector of that capital?

There is no European outcry over the slaughter of South Koreans in a torpedo attack by a North Korean vessel. I don’t recall President Sarkozy weighing in on that particular moral issue. The United Nations is angrier at Israel for enforcing a blockade against its terrorist neighbor than it is at Somalia for allowing pirates to kill and rob right off its coast. There was not much of a global outcry when Iran hijacked a British naval vessel; few in Turkey demonstrated when the French blew up a Greenpeace protest vessel.

“Disproportionate” is a term used to condemn Israeli retaliation. It does not apply to other, far more violent reprisals, such as the Russian leveling of Grozny, or the Turkish killing of Kurds, or occasional Hindu mass rioting and murdering of Muslims in India. Does Prime Minister Erdogan wish to allow “peace activists” to interview Kurds detained in his prisons, or to adjudicate the status of Kurds, Armenians, or Christian religious figures who live in Turkey? Can we imagine a peace flotilla of Swedish and British leftists sailing to Cyprus to “liberate” Greek land or investigate the “disappearance” of thousands of Greeks in 1974? And if they did, what would happen to them? About the same as would happen if they blocked a road to interdict a Turkish armored column rolling into Kurdistan.

Nor do human-rights violations mean much any more. Iran executes more of its own citizens each year than Israel has killed Palestinians in the course of war in any given year. Syria murders whomever it pleases in Lebanon without worry that any international body will ever condemn its action. I have heard a great deal about the “massacre” or “slaughter” at Jenin, where 52 Palestinians and 23 Israelis died. Indeed, the 2002 propaganda film Jenin, Jenin was a big hit on college campuses. But I have never seen a documentary Hama, Hama commemorating the real 1982 slaughter of somewhere between 10,000 and 40,000 civilians by the criminal Assad regime in Syria, with which we now so eagerly wish to restore ties. I find a 1,000-to-1 fatality rule generally applies: Each person killed by the Israel Defense Forces warrants about as much international attention as 1,000 people killed by Africans, Russians, Indians, Chinese, or Arabs.

I used to think that oil, Arab demography, fear of Islamic terrorism, and blowback from its close association with the United States explained the global double standard that is applied to Israel.

But after the hysteria over the Gaza flotilla, the outbursts of various members of the Turkish government, and Ms. Thomas’s candid revelations, I think the mad-dog hatred of Israel is more or less because it is a Jewish state. Period.

Let me explain. Intellectuals used to loudly condemn anti-Semitism because it was largely associated with those deemed to be less sophisticated people, often right-wing, who on either racial, nationalistic, or religious grounds regarded Jews as undesirable. Hating Jews was a sign of boorish chauvinism, or of the conspiratorial mind that exuded envy and jealousy of the more successful.

But in the last two decades especially, the Left has made anti-Semitism respectable in intellectual circles. The fascistic nature of various Palestinian liberation groups was forgotten, as the “occupied” Palestinians grafted their cause onto that of American blacks, Mexican-Americans, and Asian-Americans. Slurring post-Holocaust Jews was still infra dig, but damning the nation-state of Israel as imperialistic and oppressive was considered principled. No one ever cared to ask: Why Israel and not other, far more egregious examples? In other words, one could now focus inordinately on the Jews by emphasizing that one’s criticism was predicated on cosmic issues of human rights and justice. And by defaming Israel the nation, one could vent one’s dislike of Jews without being stuck with the traditional boorish label of anti-Semite.

So an anti-Semitic bigot like Helen Thomas could navigate perfectly well among the top echelons of Washington society spouting off her hatred of Israel, since her animus was supposedly against Israeli policies rather than those who made them. Only an inadvertent remark finally caught up with her to reveal that what she felt was not anger growing out of a territorial dispute, but furor about the nature of an entire people who should be deported to the sites of the Holocaust.

Finally, as I say, all this may have a strangely liberating effect on Israel. We know now that whatever it does, the world, or at least its prominent political and media figures, is going to damn it. Its longtime patron, the United States, now sees not much difference between Israel’s democratic achievement and the autocracies around it, which we are now either subsidizing or courting. As a result, the global censors have lost leverage with Israel, since they have proven to be such laughable adjudicators of right and wrong when Israel is involved.

Israelis should assume by now that whether they act tentatively or strongly, the negative reaction will be the same. Therefore why not project the image of a strong, unapologetic country to a world that has completely lost its moral bearings, and is more likely to respect Israel’s strength than its past concern for meeting an impossible global standard?

How odd that the more the activists, political leaders, and media figures issue moral strictures against Israel, the more they prove abjectly amoral. And the more they seek to pressure Israel, the more they are liberating it to do what it feels it must.

— NRO contributor Victor Davis Hanson is a senior fellow at the Hoover Institution, the editor of Makers of Ancient Strategy: From the Persian Wars to the Fall of Rome, and the author of The Father of Us All: War and History, Ancient and Modern.

Sunday, June 27, 2010

Canberra 'Back and to the Left'

blatantly masonic architecture hidden in plain sight
Also see- http://continuingcounterreformation.blogspot.com/2010/11/freemasonry-lets-america-down.html
---
Letter from Troy Space

MASONIC/ILLUMINIST PYRAMID BUILDINGS & RELATED OCCULT ARCHITECTURE:

Check the design of the street layout near the Australian government HQ in Canberra - does it look like the Illuminati "pyramid without a capstone" symbolism from the US "Great Seal" reverse that also features on the US $1 bill? For those with eyes to see it does indeed:

$1 U.S. bill, back side to the left

Having a closer look at the Australian Parliament Building on "Capital Hill", do we see any occult design at work?:

Indeed it appears to be an abstracted occultic Baphomet/Goat of Mendes, with horns & lower wing edges discernible, as well as the telltale tri-pronged centre horn & flame given a geometric abstracted representation in the architectural layout. There is even a subtle reflection in the Parliament Building grounds of the lower portions of the snakes forming an elliptical shape with the wand in the middle from that same Baphomet drawing of Eliphas Levi.

And why the apparently functionless Pyramid frame on top of the Australian Parliament building?:



Or the glass Pyramid beneath it?


I had come across similar shots of Canberra before, but not the ones of the glass pyramid that is underneath the big metal (presumably steel) framed pyramid above the government buildings.

Thursday, June 24, 2010

Defining 'Common Sense'

excerpted from

http://thefutureofthings.com/column/1001/dont-burn-the-cat.html

The Elusive “Common Sense”

I hope these two “case studies” managed to demonstrate the magnitude and complexity of what we sometimes call “common sense” and of the knowledge required in order to function well even in the simplest everyday situations.

“Common sense” is notoriously hard to define, but intuitively it implies the knowledge – much of it implicit – that we expect just about all members of society to have. In the information kiosk example, this does not include the detailed knowledge of store and cinema locations, but it does include the knowledge that we want to go to shops only when they are open; that we're willing to accept airports as “nearby” if they're 20 miles away, but cinemas have to be 2 miles away to qualify; etc.
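
As a toy illustration, here is a minimal Python sketch (my own, not anything from the article or from Cyc) encoding just those two fragments of common sense – acceptable distance depends on the kind of place, and shops only count while open. All names and thresholds are hypothetical:

# A minimal sketch of kiosk-relevant common sense; the thresholds
# mirror the examples in the text and are otherwise invented.
from datetime import time

ACCEPTABLE_DISTANCE_MILES = {"airport": 20.0, "cinema": 2.0, "newsstand": 0.5}

def is_worth_suggesting(kind, distance_miles, opens=None, closes=None, now=None):
    """True if a place satisfies our everyday expectations."""
    if distance_miles > ACCEPTABLE_DISTANCE_MILES.get(kind, 1.0):
        return False  # too far for this kind of place
    if opens and closes and now:
        return opens <= now <= closes  # we only go to shops while they are open
    return True

print(is_worth_suggesting("airport", 15.0))   # True: 20 miles is "nearby" for airports
print(is_worth_suggesting("cinema", 5.0))     # False: cinemas must be within 2 miles
print(is_worth_suggesting("newsstand", 0.2, time(9), time(17), time(20)))  # False: closed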

Is there an isolated part of our “common sense” which is all that's required for the information kiosk? No. The knowledge that we can only buy at an open shop is relevant to many aspects of our daily life. The knowledge that we're willing to travel longer distances to reach an airport is actually derived from the facts that there are not many airports situated in urban centers; that there are fewer airports than newsstands; and that the time spent going to the airport is typically the start of a longer trip. We could try to list all of these facts for the sole use of our information kiosk, but it's a large task. It would be much better to share the effort of creating this knowledge with other kinds of software.

This was quite evident back when Hogan was writing The Two Faces of Tomorrow.

In the real world, the best-known attempt to create such a universal set of “common sense” knowledge is the Cyc Project (Cyc is a registered trademark of Cycorp). Cyc (from "encyclopedia", pronounced like psych) was started in 1984 by Doug Lenat, a prominent artificial intelligence researcher and one of the original Fellows of the Association for the Advancement of Artificial Intelligence (AAAI). Cyc has been in continuous development since then, first as a project of the Microelectronics and Computer Technology Corporation and, since 1994, by Cycorp, Inc. – a company devoted to Cyc and run by Lenat.

Cyc has the ambitious goal of codifying our shared real-world knowledge into a form that can be used by software. Estimates for the number of knowledge items required for this vary, but Cyc usually states that several million items would be required. To clarify, these items do not include all that is known to humanity. For example, if there are nearly two million named species of animals known to biologists, and if we associate just a few facts with each, we're way past the “several million” budget. However, the “common sense” underlying this knowledge may be described quite differently and compactly. First, we need to define species, at least using an everyday understanding which does not have to conform to the strictest scientific understanding. What did even early human societies know about species? First, only two animals of the same species can have offspring. Second, the offspring will also be of the same species. Third, members of the same species are typically similar to each other.

[Side note: this coding of information into software-usable context and related contexts has many parallels to the ideas of the “Semantic Web”. Since this is out of the scope of this column, let me just state that these parallels are not a coincidence. However, Cyc’s vision preceded the semantic net, and is much more ambitious, in that it goes beyond understanding what a web page is about, and also aims to use this understanding, together with its common knowledge, in order to derive new conclusions and understandings. In recent years, there has also been some collaboration between Cyc and semantic-network efforts.]

Now, at least a few readers are objecting to the above informal definition of species: What about asexual reproduction and cloning, where you only need one parent? What about mules, which are offspring of parents from different species? What about sexual dimorphism (think of peacocks and peahens, or about the fish species whose males are tiny and permanently attached to the much larger females)? This is where you really need to be careful when defining the knowledge items, and this example should give you some idea of how hard it is to carry out effective “knowledge engineering”. Yet, the real test is not in absolute accuracy: every generalization will have exceptions. The test is in being able to use this common sense to make everyday deductions which are generally dependable, and in being able to capture important exceptions – sometimes in the general pool of “common sense” and sometimes in specific specialized knowledge pools.

These specialized knowledge pools are Cyc's way of going beyond common sense into codification of “expert knowledge”. In the example of knowledge about biological species, it makes sense to have some facts about mammals in the general knowledge pool (e.g. “female mammals lactate to feed their young”; “cows are mammals”), whereas the scientific definition of the class Mammalia and the taxonomic categorization of mammals into subclasses, orders etc. would be part of an expert knowledge module. A key part of Cyc design is the interaction between distributed “Cyc agents”. Every Cyc agent is endowed with some specialized knowledge, and communicates with the other agents using a shared “common sense” pool – pretty similar to the structure of human information society.

Now comes the next step: tapping into “shallow” information sources. By “shallow”, I mean sources that have not been codified as hierarchical knowledge. These could be lists and tables of data, such as location and opening times of stores, or geopolitical information. They could also be the Internet itself, using today's search engines with the Cyc knowledge pool guiding the framing of the search question and the interpretation of the web pages that are found. Thus, asking whether two politicians from different states met during 2005 would first trigger a search for their names plus terms such as “meeting”, “summit” etc., as well as the requested date. Web pages that are retrieved by this search are scanned to see whether appropriate sentences indeed appear in them. If there isn’t evidence for such a meeting, Cyc would generate text strings to determine where each politician was during 2005. If it finds a date when both politicians were in the same city, Cyc could use its knowledge regarding the roles and relationships of the politicians to determine whether it is likely that a meeting had been set for that date. Cyc also detects contradictions between different web pages, as well as contradictions between its own knowledge pool and whatever it finds in its searches, so that it can assign “levels of confidence” to the answers it produces.
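
The two-stage strategy just described can be sketched roughly as follows. This is only an illustration of the control flow; the web_search() stub and its canned results are inventions of mine, not a real search API or anything Cyc actually exposes:

# Stand-in "index": a couple of canned pages instead of the live web.
FAKE_PAGES = {
    '"Gov. A" "Gov. B" summit 2005': ["Gov. A and Gov. B held a summit in March 2005"],
}

def web_search(query):
    # Stand-in for a real search-engine call; returns matching snippets.
    return FAKE_PAGES.get(query, [])

def did_meet(person_a, person_b, year):
    # Stage 1: the knowledge pool frames direct searches for a meeting.
    for term in ("meeting", "summit", "talks"):
        if web_search(f'"{person_a}" "{person_b}" {term} {year}'):
            return "evidence of a meeting found"
    # Stage 2: no direct evidence - look for co-location during the year.
    # (Weighing roles and relationships, as the text describes, is omitted.)
    places_a = set(web_search(f'"{person_a}" itinerary {year}'))
    places_b = set(web_search(f'"{person_b}" itinerary {year}'))
    shared = places_a & places_b
    return f"possible meeting in {shared}" if shared else "no evidence"

print(did_meet("Gov. A", "Gov. B", 2005))   # evidence of a meeting found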

Once the coding of general knowledge and specialized knowledge has been completed and linked into “flat” information sources, many applications become possible: information kiosks that understand what you're looking for without forcing you to formulate your questions to match the computer's limitations; advice for pet owners that does not blithely suggest harming the pets; dependable home robots; and – possibly the one application at the top of every knowledge worker's wish list – a human-like search engine.

What's Wrong With Search?

Doug Lenat (Credit: Cycorp)

Today's search engines are awesome. They have access to so much information, and sift through it in milliseconds to answer any query we can think of. The problem, of course, is that again we teach ourselves how to query the search engine and how to interpret the results. Some of this involves our admission that some things just can't be found by using a search engine. Last year, Doug Lenat gave a lecture called “Computers versus Common Sense” at Google, heavily criticizing the state of the art in web search.

Google's Research Blog selected it as one of its “Videos of the Year” picks for 2006. In this lecture, Lenat gave examples of questions that must be broken into several searches – e.g. “is the Eiffel Tower taller than the Space Needle?”, where you must look up each height separately, find the number within the web page that comes up, and compare the numbers. Even tougher for a search engine is the trivial question “what color is a blue car?”.
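
The Eiffel Tower question decomposes into two retrievals and a comparison that today's search engines leave to the user. A minimal sketch, assuming the heights have already been scraped from separate pages (the dictionary below stands in for that retrieval step):

# Stand-in for numbers found within two separately retrieved web pages.
HEIGHTS_M = {"Eiffel Tower": 324, "Space Needle": 184}  # heights in meters

def taller_than(x, y):
    # Two separate lookups, then the comparison the user normally does by hand.
    return HEIGHTS_M[x] > HEIGHTS_M[y]

print(taller_than("Eiffel Tower", "Space Needle"))  # True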

From a commercial point of view, there may not be much value in a search engine that can answer the two questions above – the first only requires us to spend a minute or two more than we wish, and the second question is too simple to require a computer. Yet, these examples serve to show a much deeper difficulty. Imagine you're doing market research on what colors of cars are preferred by people living in a certain location or matching some demographic criteria.

Wouldn’t you want the search engine to know that “blue car” relates to car color, while “big car” relates to its size, unless it appears as part of the phrase “big car sale” etc.?

So does it all come down to the issue of “Natural Language Understanding” – the effort to get a computer to understand free-form text in English or any other language? Yes and no. Yes – because you can't understand natural language without some common-sense knowledge about the world (compare “John was baking” to “the apple pie was baking”). No – because common-sense knowledge is required for so much else besides the understanding of natural language, as the next example shows.

One commercial application that Cyc identified years ago is the search for photographs. Creators of text used in reporting, marketing or many other applications often need to supplement the text with some appropriate images. But how do you find images that fit the spirit and theme of your text? The best answer today is to attach to each photograph a short description and/or a list of keywords that describe it, which allows standard text search to pull up relevant images. This depends on the skills of the person describing the photograph as well as those of the person searching for photographs. Cyc suggests another way: If you say what the picture is showing, many contexts will be obvious by common sense. Example: A search for “someone smiling” could discover a photograph titled “a man helping his daughter take her first step”.

How does Cyc do it? It relies on combining several items known to it: when you become happy, you smile; you become happy when someone you love accomplishes a milestone; taking one's first step is a milestone; parents love their children; daughters are children; if a man has a daughter then he is her parent. While some natural-language understanding is involved in this process, the real strength of Cyc is in bringing together these items in a logical sequence that concludes it is highly likely that the man in the photograph is indeed smiling.
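
A toy forward-chaining loop (not Cyc's actual inference engine) shows how bringing these items together in sequence yields the conclusion; the facts and rules are simplified restatements of the chain in the text:

# Simplified restatement of the knowledge items as premise/conclusion rules.
facts = {"daughter takes first step", "man is daughter's parent"}
rules = [
    ({"daughter takes first step"}, "daughter reaches a milestone"),
    ({"man is daughter's parent"}, "man loves daughter"),   # parents love their children
    ({"daughter reaches a milestone", "man loves daughter"}, "man is happy"),
    ({"man is happy"}, "man is smiling"),                   # when you become happy, you smile
]

changed = True
while changed:                 # keep applying rules until nothing new is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("man is smiling" in facts)   # True: the photo matches "someone smiling"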

The State of Cyc Today

Cyc knowledge pyramid (Credit: Cycorp)

Cyc has been around since 1984. It may be the world's most ambitious and longest-lasting AI project. In fact, it was conceived in exactly this way: Leading researchers, such as Marvin Minsky, who were sympathetic to Doug Lenat's ideas, warned that it would take a thousand person-years to get all the required knowledge into a computer. Typical AI academic projects usually have about five people working on them at a time, so the expected completion date was two centuries away. This drove Lenat to turn to the commercial world, where he expected that fifty people could complete the same task in just two decades. After ten years as part of MCC, Cyc was spun off into Cycorp, which is the focus of Cyc work today. Much of its activity is funded by government and private investors, though it does sell software, knowledge and expertise for some commercial applications. Cycorp contributes some of its research as open source (OpenCyc) and a larger subset to the academic community (ResearchCyc).

What does Cyc know today? In the overview given by Cyc, the top-level characterization is of “intangible things”, including events and ideas, and “individual”, including objects and events (yes, events are both individual and intangible). Other high-level categories include “space” and “time”, dealing with things about which you can ask “where?” or “when?”; and “agents”, dealing with things having desires and intentions as well as the ability to work towards their goals.

Deeper down, we find knowledge about weather, chemistry, natural and political geography, mechanical and electric devices, professions and occupations, and dozens of other categories. Each of these includes specific facts as well as more general concepts: for example, knowledge under “political geography” contains both information about specific towns and cities, and existence and implications of borders.

It is hard to find consistent statements regarding how many “assertions” (facts and knowledge items) Cyc has today, but there are definitely millions. Similarly, it is hard to find an estimate of how many more assertions are required before the project is completed, or how much longer this will take. We can ignore the fact that the originally-estimated two decades ended a few years back; the “1,000 person-years” forecast was never more than a very rough estimate. It seems like we’re still at the phase where it is difficult to predict when – or if – Cyc will be ready to deliver on its ambitious promises.

It does seem reasonable to expect that when this does happen, it will be sudden. Cyc will be so useful that it will be used in more and more contexts, and this will add size and momentum to the snowball as it receives – or learns for itself – more and more knowledge. When will this tipping point come? As is normal with tipping points, it’s very hard to tell until the tipping has already happened.

To me, it is more interesting to view Cyc as a process that is continually gathering new insights, as well as delivering some applications, which, while falling far below the full vision, are useful in themselves. For example, Lenat mentions in his Google lecture that Cyc has been dragged “kicking and screaming” into adding “higher-order logic”. This mathematical term has to do, among other things, with relationships between relationships, such as “many people don’t know that dolphins are mammals”: “dolphins are mammals” defines a relationship between dolphins and mammals; “many people don’t know that …” defines a relationship between people and the first relationship. In daily life we use much more complex knowledge of this kind. The fact that Cyc had to do this indicates a deep character of the knowledge we all have. Isn’t much of our everyday thinking concerned not with facts but with the effect of these facts on other facts and on the people who know – or don’t know – these facts?
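
One way to picture higher-order statements, as a rough sketch of the general idea rather than Cyc's actual mechanism, is to treat a statement itself as an object that other statements can be about:

# First-order fact: a relationship between dolphins and mammals.
dolphins_are_mammals = ("isa", "dolphin", "mammal")

# Higher-order fact: a relationship between people and the fact above.
# The third element is itself a whole statement, which is what
# "relationships between relationships" requires.
belief_gap = ("not-known-by", "many-people", dolphins_are_mammals)

relation, group, statement = belief_gap
print(group, relation, statement)  # many-people not-known-by ('isa', 'dolphin', 'mammal')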

Criticisms of Cyc

A.I. – symbolism vs. connectionism (Credit: University of Wisconsin)

If you've followed the path from answering pet-care questions to understanding interpersonal relationships (expecting the father in the picture to smile), you might feel that if a computer can really do all that, it has achieved human intelligence. Furthermore, you might also get the impression that nothing below human-level intelligence would actually suffice to do a good-enough job, unless the domains of discussion are sharply circumscribed (so that a limited amount of knowledge is enough). Lenat would agree, as Cyc's home page says “Cycorp's vision is to create the world's first true artificial intelligence, having both common sense and the ability to reason with it”.

There’s no question that this goal has not been reached yet. Lenat and his co-workers believe that the goal is achievable, and that they are near the point where the computer itself could increasingly take over many of the tasks of teaching itself how to understand and reason.

Not everyone agrees – in fact, large parts of the Artificial Intelligence community are deeply skeptical about Cyc’s goals, methods and technology. While there are many kinds of criticisms, I believe that the disagreement originates in a deep-rooted and old controversy in AI – symbol-based artificial intelligence versus connectionist approaches. Some trace this schism back to the core of philosophy, where symbolism supports the philosophy of Descartes while connectionism follows Heidegger’s critique of these ideas.

At the risk of over-simplification, let’s describe symbolism as the effort to describe every aspect of mental activity as dealing with symbols and the relationship between symbols. In Cyc, for example, “parents love their children” and “daughters are children” are statements linking four symbols (parents, children, daughters, love). Cognition, in this view, is a process of manipulating these symbols using a set of rules, as when the above statements may yield the conclusion that parents love their daughters. This is often called, especially by opponents, “GOFAI”, for “Good Old Fashioned Artificial Intelligence”. Connectionism, on the other hand, starts not from symbolic descriptions that strive to model the real world, but from the real world itself. Cognition is then the reaction of interconnected units (such as neurons) to inputs from the real world, where the brain continually adjusts the connections between these units to achieve responses which are a better fit for the real world. For example, a better response could be one that made a better prediction of the next event detected by the senses.

Critics of Cyc typically use the same arguments used by connectionists against GOFAI: can a symbolic description really capture the complexity and “messiness” of the real world? How do you deal with exceptions? Birds can generally fly, but what about flightless birds, dead birds, birds whose wings have been clipped, caged birds, and parrots in Monty Python sketches? Can we make a comprehensive list of all the exceptions to this rule? How about birds that can only fly short distances? In the statements “airplanes can fly” and “birds can fly”, should “fly” be represented as the same symbol or as two separate symbols?
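
A minimal sketch of the exceptions problem, assuming a simple override scheme rather than any real knowledge-representation machinery: the default "birds can fly" holds unless a more specific fact overrides it, and the hard part is that the exception list below is never complete:

# Default knowledge plus an explicit (and inevitably incomplete) exception list.
DEFAULTS = {"bird": {"can_fly": True}}
EXCEPTIONS = {"penguin": False, "dead bird": False, "clipped-wing bird": False}

def can_fly(kind):
    if kind in EXCEPTIONS:             # specific knowledge overrides the default
        return EXCEPTIONS[kind]
    return DEFAULTS["bird"]["can_fly"]

print(can_fly("sparrow"))   # True, by default
print(can_fly("penguin"))   # False, by exception
# "Monty Python parrot", "short-distance flyer", ... remain unhandled.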

Another aspect of the “messiness” of the real world is the many shades of meaning for just about any concept. Cyc currently holds about twenty semantically-distinct meanings of inclusion (“A is part of B”). Why not five, or fifty? How can what we know about one meaning of inclusion be used for another meaning of inclusion – and should it be used or would it generate wrong conclusions? When can we deduce that a specific parent does not love his or her children? What actions can we predict from the fact that X loves Y? Does it even make sense to identify “love” with a symbol with an agreed-upon meaning?

Can robots have common sense?

A subtler set of issues revolves around how all this knowledge may be usable, even if it is correctly represented. As Herbert Dreyfus, one of the chief critics of GOFAI, says:

"Nowhere in the Encyclopedia Britannica does it say that people move forward more easily than they move backward". In case this seems frivolous, consider our reaction when we see someone walking backwards. The key point is, we’d notice something odd, and this would cause us to look for an explanation. Among many possible explanations, we may suspect that the person is walking away from some danger and decide to look in the direction where he’s looking. It could save our lives. It’s important to notice that knowing the fact mentioned by Dreyfus is not enough. Even in the limited arena of observations about movements of people, we should also state, for similar reasons, that people prefer walking to crawling, that they typically keep their hands hanging at the sides of their bodies, etc. Each of these facts could be used to predict “normal” movements and to detect the need for more explanations. “Why is this woman raising her hand while walking? Is she waving to anybody? Let’s look at the direction she’s waving” – in order to start this chain of thought, we need to remember that people usually don’t just happen to raise their hands while walking. It is even questionable whether we know how to state all the relevant facts about how people move. We can intuitively differentiate human walk from the walk of even the best-walking robots available today (one observer commented that Honda’s Asimo robot walks like a person who really needs to go to the bathroom). Can we explain to Cyc how we make this identification so quickly?

How would Cyc’s developers think of entering such a fact (“people move forward more easily than they move backward”) into their list of common-sense items? Remember that for this fact to be usable in reasoning we should also state some fact such as “when people have several ways to do something, they typically choose the easier way”. If Cyc developers don’t add these facts, could Cyc derive them from other items within its knowledge pool or from other information sources? As the above example shows, the lack of such facts, which are so obvious to us, could result in Cyc failing to make even apparently simple deductions. Such failures are well-known to users of Cyc and other similar, less-ambitious projects, many of whom believe that once some critical mass of knowledge has been achieved, the gaps will be automatically detected and filled in (possibly by Cyc asking humans to provide the missing pieces).

How would a connectionist approach teach a computer that people move forwards rather than backwards? It would let the computer teach itself, by observing the movements of many people in many situations (many AI researchers would say that there’s also a critical need here for the computer itself to “walk” – that is, to be embodied in a robot).

Let’s take a quick and simplified tour through the connectionist world:

Imagine that there’s a unit, within the large set of interconnected units, which has come (through earlier learning) to be strongly active when forward movement is observed; another unit associated with backward movement; and yet another one with human walk. I emphasize that the tagging of a unit – e.g. as identifying “human walk” – is only of interest for external investigation of these units: the operation of the connected network of units does not need, understand or use this tagging. Over many observations, the computer will find that “human walk” is almost always active together with “forward movement”, and rarely active with “backward movement”. If this rare combination occurs, it will trigger other parts of the network to look for other experiences matching this combination. In other words, attention will be drawn to something after it is discovered that it is an unexpected combination. If no earlier observations can be retrieved, the observations may be decomposed into their constituent details.

For example, the head’s direction may be ignored as a useless additional detail when the observed person is both moving and looking forwards – it is added into the “moving forward” activation. However, when attention has been focused by the unexpected pattern of unit activation, units which are activated by direction of gaze would receive a stronger signal. Eventually, this could yield the kind of reaction we’re expecting. I say “could yield” because I don’t know of any connectionist project that has demonstrated such success within real-world, unbounded situations like the ones Cyc is targeting.
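
That co-occurrence story can be caricatured in a few lines: count how often "units" are active together, and flag rare combinations as surprising. This illustrates the idea only; no actual connectionist system works this simply:

from collections import Counter

pair_counts, total_observations = Counter(), 0

def observe(active_units):
    # Strengthen the "connection" between every pair of co-active units.
    global total_observations
    total_observations += 1
    for a in active_units:
        for b in active_units:
            if a < b:
                pair_counts[(a, b)] += 1

def surprise(a, b):
    # Rarely co-active pairs score high: an unexpected combination draws attention.
    return 1.0 - pair_counts[(min(a, b), max(a, b))] / total_observations

for _ in range(99):
    observe({"human walk", "forward movement"})
observe({"human walk", "backward movement"})

print(surprise("human walk", "forward movement"))   # ~0.01: expected
print(surprise("human walk", "backward movement"))  # ~0.99: triggers a search for explanations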

There are also criticisms based on mathematical theories of logic. Mechanisms for handling exceptions, as well as for handling the higher-order logic discussed above, involve types of mathematical logic for which there are theoretical limitations regarding completeness and consistency.

The “completeness” problem implies that there could be facts which are deducible from Cyc’s knowledge, but which Cyc would never discover (this is not the same as the simpler problem of completeness, which questions whether Cyc would ever have enough facts in order to have reliable “common sense”). The “consistency” problem means that it is theoretically possible that Cyc would be able to use parts of its knowledge to decide that some claim is true, while other parts of its knowledge lead it to deduce that the same claim is false. The only known way to prove that this would never happen is to drastically limit the rate and type of knowledge creation as well as limiting knowledge content – an unacceptable solution. Cyc’s developers took the middle road: They decided to allow contradictions between different bodies of knowledge, each of which is dedicated to one kind of “expertise”, while striving towards internal consistency in each such body. Regarding theoretical objections, Cyc counters that it is an engineering project which should be judged by empirical evaluation of results.
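
Cyc's middle road can be pictured as partitioning assertions into contexts, each kept internally consistent while contradictions across contexts are tolerated. A toy sketch with invented propositions (Cyc's own term for such contexts is "microtheories"):

# Each context is internally consistent; no consistency is enforced across them.
CONTEXTS = {
    "naive-physics": {("flat", "earth"): False},
    "folk-idioms":   {("flat", "earth"): True},   # "the four corners of the earth"
}

def holds(context, proposition):
    # Truth is always evaluated relative to one context.
    return CONTEXTS[context].get(proposition)

print(holds("naive-physics", ("flat", "earth")))  # False
print(holds("folk-idioms", ("flat", "earth")))    # True: cross-context contradiction allowed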

Furthermore, if we’re trying to create human-level intelligence, shouldn’t we allow for some incompleteness and inconsistency – especially if there might be reason to believe that these are the costs of achieving real machine intelligence?

Lastly, GOFAI simply doesn’t “feel right” for many people. It does not feel like what we’re doing when we’re thinking, and unlike connectionism it has very little biological support.

Most Cyc supporters and critics would at least agree about one thing: If a computer accomplishes the goals of properly using common sense in a real-world setting, regardless of whether it was achieved using symbolism, connectionism or something else, it will have become human in many ways. Shortly thereafter, it will become superhuman, if only in its capability to process and use far more information than any human or any group of humans ever could.

Amazingly, the protagonists of The Two Faces of Tomorrow, written in 1979, who started with the deceptively simple question of how to get a computer to understand why cats shouldn’t be burned, face exactly this possibility by the end of the book. Only time will tell whether this is a coincidence or a hint of future developments. For further discussion of A.I. and common sense see the TFOT forums.


About the author:

Israel Beniaminy has a Bachelor’s degree in Physics and Computer Science, and a Graduate degree in Computer Science. He develops advanced optimization techniques at ClickSoftware Technologies, and has published academic papers on numerical analysis, approximation algorithms and artificial intelligence, as well as articles on fault isolation, service management and optimization in industry magazines.


---

Is this not all about how the printing press eventually came to be used to shape opinions?

With the internet breaking the virtual monopoly of centralized mass media resulting from decades of infiltration and ownership consolidation, a la SMOM William Randolph Hearst, technology such as this being developed by Doug Lenat/Cycorp would be a logical tool for effectuating this sort of censorship through the internet.

IOW, would it not be invaluable for thwarting the spread of ideas considered dangerous by the ruling ancien régime?

Jesuitical Drug War



Published May 4, 2008 in my companion blog Freedom of Medicine and Diet

http://freedomofmedicineanddiet.blogspot.com/2008/05/jesuitical-drug-policy-treason_04.html

From my unpublished manuscript "Coca Forgotten Medicine"

A Foreign Military Order's Un-Constitutional Rule & Treason to Humanity through the Jesuit Order Run Georgetown University, the U.S. State Department and the National Institute [for promoting] Drug Abuse

The allegiance to the drug control status quo was prevalent from the onset throughout the Clinton Administration’s bureaucracies, as has been the case with both the Democratic and Republican Parties throughout the 1900s. I found it typified by two Washington, D.C. meetings held on successive days in July 1993, some six months into the then-new Clinton Administration, with the official presentations and the responses at the Q&A sessions indicating how truth was beholden to this false ideology.

The first of these events was a plenary panel organized by the U.S. State Department and the likewise Jesuit Order-run Georgetown University Center for Strategic and International Studies. The second was a workshop panel held during the multi-day conference of the National Institute on Drug Abuse (N.I.D.A.).

The State Department-Georgetown University event was titled "Multilateralism and Drugs", and was held July 15, 1993 at the Rayburn U.S. House of Representatives Office Building. Its speakers included Yale University’s David Musto, and Timothy E. Wirth, U.S. President Clinton’s appointee as Undersecretary of State for Global Affairs.



At this event, Musto admitted a fact that he would leave out of his Reader's Digest-style articles: that dilute cocaine-alkaloid-containing Coca beverages, such as Vin Mariani, were not problematic.



Wirth, previously a U.S. Representative and later a Senator from Colorado, described as the Clinton Administration’s point man on everything from refugees to global warming, and in glowing terms by newspapers such as The Washington Post as a “reformer”, was there to give a speech titled “New Approaches to Global Drug Problems”. So I figured that he would be a good person to ask about the Coca issue, which I did at that panel’s Q and A period:


Douglas Willinger- Question: Sir, you mentioned crop eradication -- Coca eradication -- as the solution, but have you considered the alternative? Instead of crop substitution, why not cocaine conversion – that is, remove the things creating this bad situation, having Coca tea in supermarkets rather than crack in the streets. Indeed, sir, what about Bolivia's recent proposal to review the effects of Coca, and adjusting the laws accordingly if need be?

Timothy Wirth- Answer: Did I talk about crop eradication? Do I have to answer that? You sound like one of those Hemp people! Next question.
The NIDA event, a workshop panel titled Update on Drugs – Cocaine and Stimulants, was held on the second day of the U.S. N.I.D.A. National Conference on Drug Abuse Research and Practice: An Alliance for the 21st Century. It featured a number of speakers, including a Dr. Millwood, all testifying to the dangers of cocaine. One of the presenters spoke about brain damage with combined cocaine and alcohol use, including the formation of the metabolite cocaethylene. All in all, while their prognosis on cocaine was negative, I heard little to no mention of actual doses. From memory, as NIDA is uncooperative with public requests for such transcripts:


Douglas Willinger- Question: A question of dosage and paradigm. You speak of the dangers of cocaine and alcohol, but could you please elaborate as to the dosages? What stimulant would not be harmful in such doses? Is cocaine itself any more toxic, aside from the concentrated forms developed under prohibition?

Dr. Millwood- Answer: About 1 ¼ gallon of vodka and an eight ball of pharmaceutical grade cocaine hci snorted in one night. Well, uh yes, we just don't think it's that important, let's change the subject, okay?!

In other words, cocaine was a highly toxic, addictive drug only in doses that would be dangerous with any stimulant. They had no showing that it was any more toxic than the other naturally occurring stimulants, caffeine and nicotine, in like contexts. Cocaine, caffeine and nicotine are all alkaloids that serve as CNS stimulants, found in minute amounts in such plants as Coca, Coffee and Tobacco.

As licit drugs, they are taken through the use of the parent substance, or in isolated form in a mode of delivery of a pharmaceutical preparation, e.g. caffeine as No Doz or Vivarin tablets largely consisting of mannitol, and more recently various nicotine chewing gums and patches.

As an illicit drug under a prohibition of Coca leaves and cocaine, with penalties based upon the contraband’s gross weight, overall cocaine use is significantly reduced, but the amount of cocaine taken per dose may be higher, and the concentration is definitely far higher – doses of the type more apt to produce the big bang favored by these drug laws’ economics. Prohibition drives up concentration and prices, hence promoting the justification for the price: spending $50 or $100 for a chewing-gum-package-sized foil of white powder is more justified by the pronounced effects of doses that are more dangerous in every way: larger, more concentrated and more direct.

And this is popularly supported as somehow fighting drug abuse!


Such a drug policy regimen – one that suppresses a safer substance for the sake of one intrinsically dangerous, namely the adulterated, misbranded cigarettes undeniably benefited by this criminal mercantilism – underwent no visible criticism within the room where Mr. Wirth brushed aside my question. This was also so in the room where Dr. Millwood would answer my question about the amount of cocaine needed to produce the toxicities he discussed, but not my second question about the silence in the drug abuse research industry over maintaining the drug control policy that stops Coca while promoting concentrated cocaine. One can only imagine whether Wirth or Millwood, or the U.S. State Department and the U.S. Department of Health and Human Services' N.I.D.A., had given any thought to whether such anti-Coca leaf policies were reconcilable with any stance respecting the public's health, let alone the human rights of the millions of Andeans who consider Coca eradication an infringement upon their individual and cultural rights, in this day and age of moral certainty. I suppose they simply care less for that than for appeasing the masters that gave them their jobs.
Georgetown University's Jesuit Order's Underappreciated 20th Century Role


Rome's Anti-Christ Coca-ine Prohibition
http://continuingcounterreformation.blogspot.com/2008/07/roman-catholic-church-cocaine.html


Drug Warriors Disregard Pharmacokinetics
http://freedomofmedicineanddiet.blogspot.com/2008/03/drug-warriors-ignore-pharmacokinetics.html


Drug War Promotes Drug Abuse
http://freedomofmedicineanddiet.blogspot.com/2008/03/drug-war-promotes-drug-abuse-over-drug.html


Drug War Criminal Mercantilism to Protect Cigarette Industry
http://freedomofmedicineanddiet.blogspot.com/2008/03/it-was-criminal-mercantilism-to-protect.html


Drug War Criminal Mercantilism Public Health Subversion
http://freedomofmedicineanddiet.blogspot.com/2008/03/criminal-mercantilism-public-health.html


Drug War Criminal Mercantilism
http://freedomofmedicineanddiet.blogspot.com/search/label/criminal%20mercantilism

Tuesday, June 22, 2010

Ontology: Lenat's Approach Versus Those That Would Let 'Many Flowers Bloom'


From a paper, Integration and Beyond: Linking Information from Disparate Sources and into Workflow, presented in part as the keynote to the Cornerstone on Integrating Information, one of four Cornerstone sessions included in the program of the AMIA Annual Fall Symposium, Washington, DC, November 6–10, 1999, and published in the Journal of the American Medical Informatics Association, Volume 7, Number 2, Mar/Apr 2000, p. 135

Abstract

The vision of integrating information—from a variety of sources, into the way people work, to improve decisions and process—is one of the cornerstones of biomedical informatics. Thoughts on how this vision might be realized have evolved as improvements in information and communication technologies, together with discoveries in biomedical informatics, have changed the art of the possible. This review identified three distinct generations of “integration” projects. First-generation projects create a database and use it for multiple purposes. Second-generation projects integrate by bringing information from various sources together through an enterprise information architecture. Third-generation projects inter-relate disparate but accessible information sources to provide the appearance of integration. The review suggests that the ideas developed in the earlier generations have not been supplanted by ideas from subsequent generations. Instead, the ideas represent a continuum of progress along the three dimensions of workflow, structure, and extraction.

---

Member of the audience:

I’d like to ask a question about capturing ontologies from multiple people. Imagine for a moment that knowledge freezes long enough for us to try to catch it. Do you have a vision of a tool that will allow multiple knowledge-domain people to act at once? To work out discrepancies in their visions?

Mark Musen: Put differently, the question was how do we deal with the fact that there is no overarching ontology? How do we build the tools that will allow us to try to achieve consensus in ontologies? I think the answer to that question is that we do not know. I’m being a little bit facetious, but philosophers have been trying to deal with that problem for 2,000 to 3,000 years.

I think you see two different approaches in the computer science community. You see the approach that Doug Lenat has taken. He is trying to create an ontology that he believes will provide all the knowledge that one needs to read the Encyclopaedia Britannica. Such an overarching ontology would need to capture most of human existence. The real problem, though, is how you ever validate the distinctions made in that ontology and have confidence that things have been captured in a way that is consistent and understandable? How do you record all the assumptions that you make while constructing the ontology? When you have concepts like “semi-tangible object” and “semi-intangible object,” it’s very hard to know for sure whether what one records about those distinctions really makes sense.

At the other end of the spectrum, you see people who really want a thousand flowers to bloom and who are not trying to achieve that kind of perfect alignment among views of the world. For example, the Knowledge Systems Laboratory at Stanford is trying to make constrained ontologies that deal with very narrow domains, so that the kinds of problems that you allude to do not happen, because the number of concepts in the ontology is relatively small. The answer lies somewhere between Doug Lenat’s view of the world, that all we have to do is work hard enough and everything will fall into place, and the view that we can’t possibly do this, so we have to have just a small number of constrained ontologies. We need to elucidate a set of principles that will provide the basis for tools that will help us try to, if not merge small ontologies, at least create the kinds of alignments that will allow us to bring them together in ways that make them useful.

Randy Miller: One of the things that I learned from my mentor, Jack Myers, is that as an informatician, as opposed to a philosopher or a computer scientist, you do not need to represent everything. If you have a problem at hand, you represent it at a level that is tractable and doable. If you do what Doug Lenat’s doing, you can spend your entire career representing stuff that is not ever going to be used in a real system, because there is no way to apply it. While that may sound harsh, the reality is that we do not know how to represent time, severity of finding, and severity of illness well at all, but we can still build systems that do diagnosis or a good job of making recommendations for therapy. So you do not have to capture the world in all its infinite detail. The trick is to understand what the critical information is and represent things at that level. Otherwise, you get mired in detail.

Mark Musen: Let me underscore your last point. Doug Lenat actually felt pretty confident that his ontology covered all the areas that one would want to deal with, until last year, when HotBot contracted to use CYC as the basis for indexing Web pages. This contract showed, first of all, that ontologies have incredible commercial potential, but it also pointed out to Doug Lenat that there was a whole realm of human experience that was not well represented in the ontology. Specifically, there was a need to categorize different kinds of pornography which Lenat had not thought about previously.

Member of the audience:

Health Level Seven’s development of a set of reference information models is one of the major efforts for creating a structure for ontologies in the United States. Can you talk about how your organizations are participating in the development of that reference information model (RIM) and how you are using your academic experiences to contribute to that effort among providers, academics, and vendors?

Bill Stead: Vanderbilt is an institutional member and a strong advocate of HL7. The central core of our communication subsystem uses HL7, and we build middle ware as needed to bridge between the core and legacy products. We have not put direct energy into the process for defining the reference information model. We use the HL7 model as a starting point, but we extend it as needed. In this way we incorporate it into immediate solutions to real problems, while providing useful information about future directions.

Bill Hersh: None of us has been involved directly in that effort. However, our research into the nature of ontologies and vocabulary projects such as the Cannon Grouping should be useful to the effort.

Mark Musen: I will just add that I think the vendor community is in the best position to work on ontology content, because they have the most direct connection with the needs of end users. I think that academicians need to follow this work very carefully. We are, we hope, in the best position to be developing the kinds of tools that will help us examine ontologies, relate them to each other, and allow them to evolve as our understanding of the world changes.

Randy Miller: I have a slightly contrary view, partly out of ignorance about the HL7 RIM. The key question is what problems it is trying to solve. That should drive what the content is. If you can state the problems it is going to be used to solve, then you can say whether it should be clinically rich. In that case it will require lots of input from academic clinicians. If it is to solve the problem of interchange of data among vendors, then it needs vendor input. But until you explicitly state what it’s going to be used for, just building it for the sake of building it is not useful. I know that the HL7 RIM is not being built that way. I am just saying that I think that’s the way to address your question, to seek the specific purpose before giving an answer.

Perceiving Reality


to experience yet not necessarily perceive the continuing counter reformation

Ontology

http://en.wikipedia.org/wiki/Ontology

Ontology (from the Greek ὄν, genitive ὄντος: of being (neuter participle of εἶναι: to be) and -λογία, -logia: science, study, theory) is the philosophical study of the nature of being, existence or reality in general, as well as the basic categories of being and their relations. Traditionally listed as a part of the major branch of philosophy known as metaphysics, ontology deals with questions concerning what entities exist or can be said to exist, and how such entities can be grouped, related within a hierarchy, and subdivided according to similarities and differences.

Overview

Ontology concerns determining whether some categories of being are fundamental, and asks in what sense the items in those categories can be said to "be". It is the inquiry into being in so much as it is being, or into beings insofar as they exist–and not insofar as, for instance, particular facts obtained about them or particular properties related to them.

For Aristotle there are four different ontological dimensions: i) according to the various categories or ways of addressing a being as such, ii) according to its truth or falsity (e.g. fake gold, counterfeit money), iii) whether it exists in and of itself or simply 'comes along' by accident, and iv) according to its potency, movement (energy) or finished presence (Metaphysics Book Theta).

Some philosophers, notably of the Platonic school, contend that all nouns (including abstract nouns) refer to existent entities. Other philosophers contend that nouns do not always name entities, but that some provide a kind of shorthand for reference to a collection of either objects or events. In this latter view, mind, instead of referring to an entity, refers to a collection of mental events experienced by a person; society refers to a collection of persons with some shared characteristics, and geometry refers to a collection of a specific kind of intellectual activity.[1] Between these poles of realism and nominalism, there are also a variety of other positions; but any ontology must give an account of which words refer to entities, which do not, why, and what categories result. When one applies this process to nouns such as electrons, energy, contract, happiness, space, time, truth, causality, and God, ontology becomes fundamental to many branches of philosophy.

Principal questions of ontology are "What can be said to exist?", "Into what categories, if any, can we sort existing things?", "What are the meanings of being?", "What are the various modes of being of entities?". Various philosophers have provided different answers to these questions.

One common approach is to divide the extant entities into groups called categories. Of course, such lists of categories differ widely from one another, and it is through the co-ordination of different categorical schemes that ontology relates to such fields as library science and artificial intelligence. Such an understanding of ontological categories, however, is merely taxonomic, classificatory. The categories are, properly speaking, the ways in which a being can be addressed simply as a being, such as what it is (its 'whatness', quidditas or essence), how it is (its 'howness' or qualitativeness), how much it is (quantitativeness), where it is, its relatedness to other beings, etc.

Some Fundamental Questions

Further examples of ontological questions include:

What is existence, i.e. what does it mean for a being to be?

Is existence a property?

Is existence a genus or general class that is simply divided up by specific differences?

Which entities, if any, are fundamental? Are all entities objects?

How do the properties of an object relate to the object itself?

What features are the essential, as opposed to merely accidental, attributes of a given object?

How many levels of existence or ontological levels are there? And what constitutes a 'level'?

What is a physical object?

Can one give an account of what it means to say that a physical object exists?

Can one give an account of what it means to say that a non-physical entity exists?

What constitutes the identity of an object?

When does an object go out of existence, as opposed to merely changing?

Do beings exist other than in the modes of objectivity and subjectivity, i.e. is the subject/object split of modern philosophy inevitable?

Concepts

Quintessential ontological concepts include:

Universals and Particulars
Substance and Accident
Abstract and Concrete objects
Essence and Existence
Determinism and Indeterminism

History of ontology

Etymology

While the etymology is Greek, the oldest extant record of the word itself is the Latin form ontologia, which appeared in 1606, in the work Ogdoas Scholastica by Jacob Lorhard (Lorhardus) and in 1613 in the Lexicon philosophicum by Rudolf Göckel (Goclenius).

The first occurrence in English of "ontology" as recorded by the OED (Oxford English Dictionary, second edition, 1989) appears in Bailey’s dictionary of 1721, which defines ontology as ‘an Account of being in the Abstract’, though, of course, such an entry indicates the term was already in use at the time. It is likely the word was first used in its Latin form by philosophers, based on the Latin roots, which themselves are based on the Greek. The current on-line edition of the OED (Draft Revision September 2008) gives as first occurrence in English a work by Gideon Harvey (1636/7–1702): Archelogia philosophica nova; or, New principles of Philosophy. Containing Philosophy in general, Metaphysicks or Ontology, Dynamilogy or a Discourse of Power, Religio Philosophi or Natural Theology, Physicks or Natural philosophy – London, Thomson, 1663.

Origins
Parmenides and Monism

Parmenides was among the first to propose an ontological characterization of the fundamental nature of reality. In his prologue or proem he describes two views of reality: first, that change is impossible, and therefore existence is eternal. Consequently, our opinions about reality must often be false and deceitful. Most of Western philosophy and science – including the fundamental concepts of falsifiability and the conservation of energy – have emerged from this view. It posits that existence is what exists, and that there is nothing that does not exist. Hence, there can be neither void nor vacuum; and true reality can neither come into being nor vanish from existence. Rather, the entirety of creation is limitless, eternal, uniform, and immutable. Parmenides thus posits that change, as perceived in everyday experience, is illusory. Everything that we can apprehend is but one part of a single entity. This idea somewhat anticipates the modern concept of an ultimate grand unification theory that finally explains all of reality in terms of one inter-related sub-atomic reality which applies to everything.[citation needed]

Ontological pluralism

The opposite of Eleatic monism is the pluralistic conception of Being. In the 5th century BC, Anaxagoras and Leucippus replaced[2] the reality of Being (unique and unchanging) with that of Becoming, and therefore with a more fundamental and elementary ontic plurality. This thesis originated in the Greek Ionian world, stated in two different ways by Anaxagoras and by Leucippus. The first theory dealt with "seeds" (which Aristotle referred to as "homoeomeries") of the various substances. The second was the atomistic theory,[3] which dealt with reality as based on the vacuum, the atoms, and their intrinsic movement in it.

The materialist atomism proposed by Leucippus was indeterministic, but Democritus developed it in a deterministic direction. It was later (in the 4th century BC) that the original atomism was taken up again as indeterministic by Epicurus. He confirmed reality as composed of an infinity of indivisible, unchangeable corpuscles or atoms (atomon, lit. ‘uncuttable’), but he gave weight as a characteristic of atoms, whereas for Leucippus they were characterized only by a "figure", an "order", and a "position" in the cosmos (Aristotle, Metaphysics, I, 4, 985). Atoms, moreover, create the whole through their intrinsic movement in the vacuum, producing the diverse flux of being. Their movement is influenced by the parenklisis (Lucretius names it clinamen), which is determined by chance. These ideas foreshadowed the understanding of traditional physics until the nature of atoms was discovered in the 20th century.[citation needed]

Plato

Plato developed this distinction between true reality and illusion, in arguing that what is real are eternal and unchanging Forms or Ideas (a precursor to universals), of which things experienced in sensation are at best merely copies, and real only in so far as they copy (‘participate in’) such Forms. In general, Plato presumes that all nouns (e.g., ‘Beauty’) refer to real entities, whether sensible bodies or insensible Forms. Hence, in The Sophist Plato argues that Being is a Form in which all existent things participate and which they have in common (though it is unclear whether ‘Being’ is intended in the sense of existence, copula, or identity); and argues, against Parmenides, that Forms must exist not only of Being, but also of Negation and of non-Being (or Difference).

Aristotle

Ontology as an explicit discipline was inaugurated by Aristotle, in his Metaphysics, as the study of that which is common to all things which exist, and of the categorisation of the diverse senses in which things can and do exist. What exists, Aristotle concludes, is a plurality of independently existing substances – roughly, physical objects – on which the existence of other things, such as qualities or relations, may depend; these substances consist both of a form (e.g. a shape, pattern, or organisation) and of a matter so formed (Hylomorphism). Disagreeing with Plato, who taught that frameworks or Forms have an existence of their own, Aristotle holds that universals do not have an existence over and above the particular things which instantiate them.

Other ontological topics

Ontological and epistemological certainty

René Descartes, with "cogito ergo sum" or "I think, therefore I am", argued that "the self" is something that we can know exists with epistemological certainty. Descartes argued further that this knowledge could lead to a proof of the certainty of the existence of God, using the ontological argument that had been formulated first by Anselm of Canterbury.

Certainty about the existence of "the self" and "the other", however, came under increasing criticism in the 20th century. Sociological theorists, most notably George Herbert Mead and Erving Goffman, saw the Cartesian Other as a "Generalized Other", the imaginary audience that individuals use when thinking about the self. According to Mead, "we do not assume there is a self to begin with. Self is not presupposed as a stuff out of which the world arises. Rather the self arises in the world."[4][5] The Cartesian Other was also used by Sigmund Freud, who saw the superego as an abstract regulatory force, and by Émile Durkheim, who viewed it as a psychologically manifested entity representing God in society at large.

Body and environment, questioning the meaning of being

Schools of subjectivism, objectivism and relativism existed at various times in the 20th century, and the postmodernists and body philosophers tried to reframe all these questions in terms of bodies taking some specific action in an environment. This relied to a great degree on insights derived from scientific research into animals taking instinctive action in natural and artificial settings — as studied by biology, ecology, and cognitive science.

The processes by which bodies related to environments became of great concern, and the idea of being itself became difficult to define. What did people mean when they said "A is B", "A must be B", "A was B"...? Some linguists advocated dropping the verb "to be" from the English language, leaving "E-Prime", supposedly less prone to bad abstractions. Others, mostly philosophers, tried to dig into the word and its usage. Heidegger distinguished human being as existence from the being of things in the world, and proposed that our way of being human and the way the world is for us are cast historically through a fundamental ontological questioning. These fundamental ontological categories provide the basis for communication in an age: a horizon of unspoken and seemingly unquestionable background meanings, such as human beings understood unquestioningly as subjects and other entities understood unquestioningly as objects. Because these basic ontological meanings both generate and are regenerated in everyday interactions, the locus of our way of being in an historical epoch is the communicative event of language in use.[4] For Heidegger, however, communication in the first place is not among human beings; rather, language itself takes shape in response to questioning (the inexhaustible meaning of) being.[6] Even the focus of traditional ontology on the 'whatness' or 'quidditas' of beings in their substantial, standing presence can be shifted to pose the question of the 'whoness' of human being itself.[7]
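
As a small illustration of the E-Prime idea mentioned above, here is a Python sketch that flags sentences using a form of "to be". The word list and approach are my own illustration, not a claim about how E-Prime advocates actually proceed.

--------------------------------------
import re

# Inflected forms of "to be"; the list is illustrative, not exhaustive
# (contractions like "it's" would need extra handling).
BE_FORMS = re.compile(r"\b(?:am|is|are|was|were|be|been|being)\b",
                      re.IGNORECASE)

def violates_e_prime(sentence):
    """Return True if the sentence uses any form of the verb 'to be'."""
    return BE_FORMS.search(sentence) is not None

print(violates_e_prime("A is B"))                # True
print(violates_e_prime("The rose appears red"))  # False
--------------------------------------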

Prominent ontologists

Aristotle
Aquinas
Mario Bunge
Franz Brentano
Gilles Deleuze
Hans-Georg Gadamer
Nicolai Hartmann
Martin Heidegger
Heraclitus
Georg Wilhelm Friedrich Hegel
Edmund Husserl
Immanuel Kant
Gottfried Leibniz
Leucippus
Alexius Meinong
Parmenides
Plato
Plotinus
Proclus
W. V. O. Quine
Bertrand Russell
Gilbert Ryle
Jean-Paul Sartre
Baruch Spinoza
Alfred North Whitehead
Ludwig Wittgenstein

Monday, June 21, 2010

Roman Catholicism Can Thwart The Internet Spread of 'Heresies'



Message from Avles:

http://wwwfreespeechbeneathushs.blogspot.com/2006/11/stephen-devoy_26.html#comments

http://cyc.com/cyc/company/news/OpenCyc%20Brings%20Meaning%20to%20the%20Web

--------------------------------------
OpenCyc is a wide-ranging and increasingly comprehensive ontology that describes things and events in the world in logical terms that computers can reason about. Its purpose is to provide a shared vocabulary for Web applications, allowing them to automatically reason about, and integrate, the content of web sites and web services. The OpenCyc ontology and knowledge base goes beyond tag-sets, taxonomies, and other reference vocabularies, because it has been designed and extensively tested for use in automated reasoning. As Andraž Tori, CTO of Zemanta Ltd. sees it,


"Common semantic vocabularies are the missing link for the semantic web. Blogs cover an incredible range of subjects, so meaning-based content integration using the huge OpenCyc ontology can provide an amazing user experience for bloggers and other content authors."

(ontology:http://en.wikipedia.org/wiki/Ontology_%28information_science%29)
--------------------------------------
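
To make the quoted idea concrete, here is a minimal Python sketch of an ontology as machine-readable assertions plus a tiny reasoner that chains "isa" links. This is emphatically not the real OpenCyc knowledge base, API, or CycL; every term below is invented.

--------------------------------------
# Facts stored as (subject, predicate, object) triples, all invented.
TRIPLES = {
    ("Fido",   "isa", "Dog"),
    ("Dog",    "isa", "Mammal"),
    ("Mammal", "isa", "Animal"),
    ("Animal", "isa", "LivingThing"),
}

def isa_closure(term):
    """Return every class reachable from `term` via chained 'isa' links."""
    found, frontier = set(), {term}
    while frontier:
        current = frontier.pop()
        for s, p, o in TRIPLES:
            if s == current and p == "isa" and o not in found:
                found.add(o)
                frontier.add(o)
    return found

# A web application could use the closure to conclude that content
# tagged "Fido" is also, implicitly, about animals in general.
print(sorted(isa_closure("Fido")))  # ['Animal', 'Dog', 'LivingThing', 'Mammal']
--------------------------------------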

This is how the routine control works: they control the search engines so that you either fail to find a given object, or discover only what they want you to discover.

Thursday, June 17, 2010
INTERNET IS A N.A.T.O. MILITARY BATTLEFIELD!
http://avlesbeluskesexposed.blogspot.com/2010/06/internet-is-nato-military-battlefield.html

(I hope they don't manipulate this message!)

6/17/2010 4:41 AM

About the President and CEO of Cycorp developing this technology, Douglas Lenat:

http://cyc.com/cyc/company/lenat



Doug is one of the world's leading computer scientists, and is both the founder of the CYC® project and the president of Cycorp. He has been a Professor of Computer Science at Carnegie-Mellon University and Stanford University. He is a prolific author, whose hundreds of publications include the books


Knowledge Based Systems in Artificial Intelligence (1982, McGraw-Hill)
Building Expert Systems (1983, Addison-Wesley)
Knowledge Representation (1988, Addison-Wesley)
Building Large Knowledge Based Systems (1989, Addison-Wesley)

His 1976 Stanford thesis earned him the biennial IJCAI Computers and Thought Award in 1977.

He was one of the original Fellows of the AAAI (American Association for Artificial Intelligence).


From Wikipedia:

Douglas B. Lenat (born in 1950) is the CEO of Cycorp, Inc. of Austin, Texas, and has been a prominent researcher in artificial intelligence, especially machine learning (with his AM and Eurisko programs), knowledge representation, blackboard systems, and "ontological engineering" (with his Cyc program at MCC and at Cycorp). He has also worked in military simulations and published a critique of conventional random-mutation Darwinism[citation needed] based on his experience with Eurisko. Lenat was one of the original Fellows of the AAAI.

Lenat's quest, in the Cyc project, to build the basis of a general artificial intelligence by manually representing knowledge in the formal language, CycL, based on extensions to first-order predicate calculus has not been without its critics, among them many members of the MIT hacker culture. It is perhaps for this reason that "bogosity" is jokingly said to be measured in microlenats according to the Jargon File, the lenat being considered too large for practical use.
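
For readers unfamiliar with what "extensions to first-order predicate calculus" buys, here is a toy Python sketch of inference in that spirit: forward-chaining two hard-coded Horn-style rules over ground facts until nothing new follows. Actual CycL is vastly richer; the facts and rules here are invented for illustration.

--------------------------------------
# Ground facts as (predicate, arg1, arg2) tuples, all invented.
FACTS = {("mother", "Sue", "Tom"), ("father", "Tom", "Ann")}

def infer(facts):
    """Forward-chain two hard-coded Horn-style rules to a fixed point:
    parent(X, Y)      <- mother(X, Y) or father(X, Y)
    grandparent(X, Z) <- parent(X, Y) and parent(Y, Z)
    """
    facts = set(facts)
    while True:
        new = set()
        for pred, a, b in facts:
            if pred in ("mother", "father"):
                new.add(("parent", a, b))
        for p1, a, b in facts:
            for p2, c, d in facts:
                if p1 == "parent" and p2 == "parent" and b == c:
                    new.add(("grandparent", a, d))
        if new <= facts:       # fixed point: nothing new was derived
            return facts
        facts |= new

print(sorted(infer(FACTS)))   # includes ('grandparent', 'Sue', 'Ann')
--------------------------------------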

At the University of Pennsylvania, Lenat received his Bachelor's degree in Mathematics and Physics, and his Master's degree in Applied Mathematics in 1972. He received his Ph.D. from Stanford University (published in Knowledge-based systems in artificial intelligence, along with the Ph.D. thesis of Randall Davis, McGraw-Hill, 1982) in 1976. His advisor was Professor Edward Feigenbaum.

In 1976 Lenat started teaching at Carnegie-Mellon and commenced his work on Eurisko, but returned to Stanford in a teaching role in 1978. His continuing work on Eurisko led to attention in 1982 from DARPA and MCC in Austin, Texas. In 1984 he left Stanford to commence work on Cyc[1], the fruits of which were spun out of MCC into Cycorp in 1994. In 1986, he estimated the effort to complete Cyc would be 250,000 rules and 350 man-years of effort[2].

As of 2006, Lenat continues his work on Cyc at Cycorp. He is also a member of TTI/Vanguard's advisory board.

Quotes

"Intelligence is ten million rules."

"The time may come when a greatly expanded Cyc will underlie countless software applications. But reaching that goal could easily take another two decades." [3]

"Once you have a truly massive amount of information integrated as knowledge, then the human-software system will be superhuman, in the same sense that mankind with writing is superhuman compared to mankind before writing."

References
1. Lenat, Douglas. "Hal's Legacy: 2001's Computer as Dream and Reality. From 2001 to 2001: Common Sense and the Mind of HAL". Cycorp, Inc. http://www.cyc.com/cyc/technology/halslegacy.html. Retrieved 2006-09-26.

2. The Editors of Time-Life Books (1986). Understanding Computers: Artificial Intelligence. Amsterdam: Time-Life Books. p. 84. ISBN 0-7054-0915-5.

3. TechnologyReview.com (March 2005).

4. The Editors of Time-Life Books (1986). Understanding Computers: Artificial Intelligence. Amsterdam: Time-Life Books. pp. 81–84. ISBN 0-7054-0915-5.

"Beyond the Semantic Web" video lecture at NIPS 2008.
"How David Beats Goliath" article at The New Yorker.

Retrieved from http://en.wikipedia.org/wiki/Douglas_Lenat



About some of his work, from a CNN article dated February 12, 2003:

Doug Lenat, president of Cycorp, says his researchers have built the beginnings of a system to identify calling patterns between suspected terrorists.

Financed by more than $20 million in government contracts, researchers are taking the first steps toward developing a system that could sift through the financial, telephone, travel and medical records of millions of people in hopes of identifying terrorists before they strike. So far, the companies awarded contracts by the Defense Department are using only fabricated data in their work on the program, which is called Total Information Awareness.

The Pentagon's technology chief, Pete Aldridge, has said the department is interested in tying together such privately held data as credit card records, bank transactions, car rental receipts and gun purchases, along with massive quantities of intelligence information already gathered by the federal government.

The project has met some resistance in Congress because of privacy concerns. Some lawmakers are pushing an amendment to a spending bill that would prohibit the system from ever gathering information on American citizens without a congressional vote approving it.

Meanwhile, contractors and researchers told The Associated Press that they have already been developing pieces of TIA.

For example, Doug Lenat, president of Texas-based Cycorp, said his researchers had already built a system to identify phone-calling patterns as they might exist among potential terrorists overseas.

Other TIA contractors include defense giant Raytheon and Telcordia, a telecommunications company specializing in research and development. Several other companies have been waiting to finalize deals.

So far, contractors have worked with fake data, things like made-up telephone numbers and receipts that look like real consumer records, but aren't, according to interviews and public records.

Aldridge outlined the program in a news conference in November after questions arose about the choice of John Poindexter to head TIA.

The former admiral and national security adviser to President Reagan has been a lightning rod.

A figure in the Iran-Contra scandal, he was convicted on charges of lying to Congress, destroying official documents and obstructing a congressional investigation. The verdicts were overturned on appeal.

From the start, the idea of TIA has proven controversial, pitting national security worries against fears the government would run roughshod over individual privacy.

"We're talking about the most expansive, far reaching surveillance program ever proposed. The Congress has got to take a stand here," said Sen. Ron Wyden, D-Oregon, who has led efforts to restrict TIA.

Pentagon officials declined repeated interview requests by AP for this story. After coming under earlier Senate criticism, the Defense Department named a TIA oversight panel and issued a news release denying it is building a gigantic database.

However, a document that was part of the department's bid solicitation for the TIA said "the term 'database' is intended to convey a new kind of extremely large, omni-media, virtually-centralized and semantically rich information repository."

Peter Higgins, a consultant and former CIA chief information officer, said what officials wanted from TIA was a system that would use relevant private and government compiled information to spot patterns or convergences.

For example, a government-collected list of every person treated for anthrax exposure could help find people plotting a biological attack. Even more useful: finding people on that list who also telephone Afghanistan.
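
Computationally, the "convergence" Higgins describes reduces to intersecting record sets. Here is a hedged Python sketch; all identifiers and records below are fabricated toy data, in the same spirit as the fake data the contractors reportedly worked with.

--------------------------------------
# Hypothetical treatment list and call records, entirely made up.
anthrax_treated = {"person_17", "person_42", "person_88"}

call_records = [  # (caller, destination_country)
    ("person_03", "Canada"),
    ("person_42", "Afghanistan"),
    ("person_88", "France"),
    ("person_17", "Afghanistan"),
]

# "Convergence": people on the treatment list who also called Afghanistan.
flagged = sorted(
    caller
    for caller, country in call_records
    if country == "Afghanistan" and caller in anthrax_treated
)

print(flagged)  # ['person_17', 'person_42']
--------------------------------------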

Electronic records are already ubiquitous in corporate America. Businesses keep lists of cardiac patients, BMW owners, subscribers to porn magazines, even people who tend to do their grocery shopping about the time they receive sales circulars, Higgins said.

Privacy laws governing the disclosure of personal electronic data vary widely, depending on the type of data.

The Fair Credit Reporting Act, for example, forbids credit bureaus from combining the data they collect about a customer's on-time payment history with data the bureaus sell to direct marketers. The Federal Election Commission allows the Republican and Democratic parties to sell lists of people who contribute.

The Pentagon began advertising for bids to work on TIA last March, inviting ideas to exploit "novel" information sources and new electronic research methods.

Overseeing the research is the Defense Advanced Research Projects Agency, or DARPA, the same office that developed the Internet. According to the published solicitation, DARPA planned a five-year timeline for TIA: three to develop ideas and demonstrations, two to build and expand on the most promising ones.

The TIA budget is $30 million from the current and past fiscal years.

In all, 26 bids were received, said DARPA spokeswoman Jan Walker. Four companies were awarded contracts. According to the TIA Web site, many other organizations were already working on pieces Poindexter planned to connect to TIA.

The companies included:

• Cycorp, based in Austin, Texas, which was awarded $9.8 million to work on a prototype database. The company specializes in searching data.


• Telcordia, based in Morristown, New Jersey, which won a $5.2 million contract to focus on connecting data already available within different government offices.


• Hicks Associates, of McLean, Virginia, which was awarded $3.6 million to study the feasibility of TIA, how it would develop, and to create a prototype.


• Booz, Allen & Hamilton, based in Falls Church, Virginia, which won a $1.5 million contract. Its purpose was not publicly disclosed.

Raytheon Co., based in Lexington, Massachusetts, confirmed that it is under contract with DARPA. Spokesman David Shay declined to outline Raytheon's specific role.

Another research firm, RAND Corp., based in Santa Monica, California, confirmed it was expecting to work on TIA. Neither the company nor the Pentagon would provide details.





See:

Stephen Devoy
http://wwwfreespeechbeneathushs.blogspot.com/2006/11/stephen-devoy_26.html


Addendum, June 22, 2010:

Douglas Lenat's name appears within the participant lists of the RAND/NIC Information Revolution conferences reproduced below, under the May 2000 conference on the technology drivers of the information revolution:

PARTICIPANTS IN THE RAND/NIC INFORMATION REVOLUTION CONFERENCES

THE NOVEMBER 1999 CONFERENCE ON SOCIETAL TRENDS DRIVEN BY THE INFORMATION REVOLUTION

Dr. Jon B. Alterman (United States) Middle East Program Officer, United States Institute of Peace

Professor Kim V. Andersen (Denmark) Department of Informatics, Copenhagen Business School

Dr. Robert H. Anderson (United States) Senior Information Scientist and Head, Information Sciences Group, RAND

Professor Vallampadugai S. Arunachalam (India) Engineering & Public Policy Department and Robotics Institute, Carnegie Mellon University

Dr. Tora Kay Bikson (United States) Senior Behavioral Scientist, RAND

Mr. Taylor Boas (United States) Carnegie Endowment for International Peace
Professor Paul Bracken (United States) School of Management, Yale University

Mr. Clinton C. Brooks (United States) Corporate Knowledge Strategist, National Security Agency
Professor Eric Brousseau (France) Centre ATOM, Universite de Paris I Pantheon Sorbonne



Professor William Caelli (Australia) School of Data Communications, Queensland University of Technology

Mr. Colin Crook (United States) Senior Fellow, Wharton School; Former Senior Technology Officer, Citibank

Dr. James Dewar (United States) Senior Mathematician, RAND

Dr. William Drake (United States) Senior Associate and Director of the Project on the Information Revolution and World Politics, Carnegie Endowment for International Peace
Professor Francis Fukuyama (United States) Institute of Public Policy, George Mason University

Dr. Lawrence K. Gershwin (United States) National Intelligence Officer for Science & Technology, National Intelligence Council

Mr. David C. Gompert (United States) Vice President, National Security Research Division; Director, National Defense Research Institute, RAND

Professor Sy Goodman (United States) University of Arizona, Georgia Tech, and Stanford University

Dr. David Gordon (United States) National Intelligence Officer for Economics and Global Issues, National Intelligence Council

Dr. Jerrold Green (United States) Senior Political Scientist, Director of International Development; Director, Center for Middle East Public Policy, RAND

Dr. Eugene C. Gritton (United States) Director, Acquisition and Technology Policy Program, RAND

Dr. Richard O. Hundley (United States) Senior Physical Scientist, RAND

Dr. Paul Kozemchak (United States) Special Assistant to the Director, Defense Advanced Research Projects Agency

Dr. John Kriese (United States) Chief Scientist, Defense Intelligence Agency

Ms. Ellen Laipson (United States) Vice Chairman, National Intelligence Council

Dr. Martin Libicki (United States) Senior Policy Analyst, RAND

Mr. John Mabberley (United Kingdom) Managing Director, DERAtec, Defence Evaluation and Research Agency

Ms. Yuko Maeda (Japan) Nomura Research Institute America
Professor Mark Mason (United States) School of Foreign Service, Georgetown University

Mr. Hideo Miyashita (Japan) General Manager, Center for Cyber Communities Initiative, Nomura Research Institute Ltd.

Dr. James Mulvenon (United States) Associate Political Scientist, RAND

Dr. C. Richard Neu (United States) Senior Economist and Associate Director, Project Air Force, RAND

Mr. Yoshiyuki Noguchi (Japan) President, Nomura Research Institute America

Dr. William Nolte (United States) Director, Outreach and Strategic Planning, National Intelligence Council
Professor M. J. Norton (United Kingdom) Head of Electronic Business, Institute of Directors


Mr. Ian Pearson (United Kingdom) Futurologist, British Telecommunications Laboratories
Professor Larry Press (United States) Chairman, CIS Department, California State University at Dominguez Hills

Ms. Betsy Quint-Moran (United States) Strategic Assessments Group, Office of Transnational Issues, Central Intelligence Agency

Dr. Enid Schoettle (United States) Special Advisor to the Chairman, National Intelligence Council
Dr. Brian Shaw (United States) Deputy National Intelligence Officer for Science & Technology, National Intelligence Council
Professor Ernest Wilson (United States) Director, Center for International Development and Conflict Management, University of Maryland at College Park

Mr. Robert Worden (United States) Federal Research Division, Library of Congress

Ms. Lily Wu (United States) Former Director, Equity Research, Salomon Smith Barney, Hong Kong and San Francisco; Currently Acting CFO, Disappearing Inc. and MovieQ.com

Mr. Boris Zhikharevich (Russia) Head, Strategic Planning Department, Leontief Centre, St. Petersburg

THE MAY 2000 CONFERENCE ON THE TECHNOLOGY DRIVERS OF THE INFORMATION REVOLUTION

Dr. Robert H. Anderson (United States) Senior Information Scientist and Head, Information Sciences Group, RAND

Dr. Philip Antón (United States) Senior Computer Scientist, RAND
Professor Vallampadugai S. Arunachalam (India) Engineering & Public Policy Department and Robotics Institute, Carnegie Mellon University

Dr. Steven Bankes (United States) Senior Computer Scientist, RAND

Mr. John Baskin (United States) Deputy National Intelligence Officer for Economics and Global Issues, National Intelligence Council

Mr. Jeffrey Benjamin (United States) Senior Associate, Booz Allen Hamilton

Dr. Tora Kay Bikson (United States) Senior Behavioral Scientist, RAND

Dr. Joel Birnbaum (United States) Chief Scientist, Hewlett-Packard Company

Mr. Maarten Botterman (The Netherlands) Research Leader, RAND Europe
Professor William J. Caelli (Australia) School of Data Communications, Faculty of Information Technology, Queensland University of Technology, Australia

Dr. Jonathan Caulkins (United States) Director, Pittsburgh Office, RAND

Mr. Colin Crook (United States) Senior Fellow, Wharton School; Former Senior Technology Officer, Citibank

Professor Peter Denning (United States) Computer Science Department, George Mason University

Dr. James Dewar (United States) Senior Mathematician and Director, Research Quality Assurance, RAND


Dr. David Farber (United States) Chief Technologist, Federal Communications Commission; Professor, University of Pennsylvania

Dr. Robert Frederking (United States) Chair, Graduate Programs in Language Technology, Carnegie Mellon University

Professor Erol Gelenbe (United States) Associate Dean of Engineering & Computer Science, University of Central Florida

Dr. Lawrence K. Gershwin (United States) National Intelligence Officer for Science & Technology, National Intelligence Council

Dr. Eugene C. Gritton (United States) Director, Acquisition and Technology Policy Program, RAND

Mr. Eric Harslem (United States) Senior Vice President of Products and Technology Strategy, Dell Computer Corporation

Mr. Stanley Heady (United States) Executive for Research Alliances, National Security Agency

Dr. Charles M. Herzfeld (United States) Independent Consultant

Dr. Richard O. Hundley (United States) Senior Physical Scientist, RAND

Mr. James M. Kearns (United States) Financial Design Inc.

Dr. Paul Kozemchak (United States) Special Assistant, Intelligence Liaison, Defense Advanced Research Projects Agency

Dr. John T. Kriese (United States) Chief Scientist, Defense Intelligence Agency

Dr. Douglas Lenat (United States) President, CYCORP


Mr. David Marvit (United States) Director, Strategy, Disappearing Inc.
Professor Noel MacDonald (United States) Department of Mechanical & Environmental Engineering, University of California at Santa Barbara

Dr. William Mularie (United States) Director, Information Systems Office, Defense Advanced Research Projects Agency

Dr. C. Richard Neu (United States) Senior Economist, RAND

Dr. Edward C. Oliver (United States) Director, Advanced Scientific Computing Research, Department of Energy

Professor Raj Reddy (United States) Herbert A. Simon University Professor, Carnegie Mellon University

Professor William L. Scherlis (United States) School of Computer Science, Carnegie Mellon University

Dr. Enid Schoettle (United States) Special Advisor to the Chairman, National Intelligence Council
Dr. Brian Shaw (United States) Deputy National Intelligence Officer for Science & Technology, National Intelligence Council

Professor Mary Shaw (United States) School of Computer Science, Carnegie Mellon University

Professor Robert Simon (United States) Department of Computer Science, George Mason University

Dr. Stephen L. Squires (United States) Special Assistant for Information Technology, Defense Advanced Research Projects Agency

Mr. Phillip Webb (United Kingdom) Chief Information Officer, Defence Evaluation and Research Agency, Farnborough, United Kingdom

Ms. Lily Wu (United States) Chief Financial Officer, XLinux Inc.

Mr. Rick E. Yannuzzi (United States) Senior Deputy National Intelligence Officer for Strategic and Nuclear Programs, National Intelligence Council

THE NOVEMBER 2000 CONFERENCE ON THE COURSE OF THE INFORMATION REVOLUTION IN LATIN AMERICA

Dr. Robert H. Anderson (United States) Senior Information Scientist and Head, Information Sciences Group, RAND

Mr. Fulton T. Armstrong (United States) National Intelligence Officer for Latin America, National Intelligence Council

Mr. Diego Arria (Venezuela) Chairman, Technology Holdings International; Former Permanent Representative of Venezuela at the United Nations

Dr. John Baskin (United States) Deputy National Intelligence Officer for Economics and Global Issues, National Intelligence Council

Dr. Tora Kay Bikson (United States) Senior Behavioral Scientist, RAND
Professor Antonio Jose Junqueira Botelho (Brazil) Department of Politics and Sociology, Pontificial Catholic University of Rio de Janeiro

Mr. Juan Enriquez (Mexico) Researcher, David Rockefeller Center for Latin American Studies, Harvard University; Former CEO of Mexico City’s Urban Development Corporation; Coordinator General of Economic Policy and Chief of Staff to Mexico’s Secretary of State

Dr. Lawrence K. Gershwin (United States) National Intelligence Officer for Science & Technology, National Intelligence Council

Dr. David Gordon (United States) National Intelligence Officer for Economics and Global Issues, National Intelligence Council

Dr. Eugene C. Gritton (United States) Director, Acquisition and Technology Policy Program, RAND

Dr. Timothy Heyman (Mexico) President, Heyman y Asociados, S.C., Mexico City; Former President, ING Baring Grupo Financiero (Mexico)

Dr. Richard O. Hundley (United States) Senior Physical Scientist, RAND

Mr. Elliot Maxwell (United States) Special Advisor to the Secretary of Commerce for the Digital Economy, U.S. Department of Commerce

Ms. Lee Mizell (United States) Doctoral Fellow, RAND Graduate School

Mr. William T. Ortman (United States) Deputy National Intelligence Officer for Latin America, National Intelligence Council

Mr. Jonathan Orszag (United States) Managing Director, Sebago Associates Inc.; Former Assistant to the Secretary of Commerce; Director of the Office of Policy and Strategic Planning, Department of Commerce

Mr. Ricardo Peon (Mexico) Manager of Telecoms and Internet Investments, Heyman y Asociados, S.C., Mexico City; Former Managing Director, Deutsche Bank Mexico

Mr. Danilo Piaggesi (Italy) Head, Information Technologies for Development Division, Inter-American Development Bank

Professor Larry Press (United States) Chairman, CIS Department, California State University at Dominguez Hills

Dr. Susan Kaufman Purcell (United States) Vice President, The Council of the Americas, New York City

Dr. Angel Rabasa (United States) Senior Policy Analyst, RAND

Mr. David Rothkopf (United States) Chairman and Chief Executive, Intellibridge Corporation; Former Acting Under Secretary of Commerce for International Trade; Deputy Under Secretary of Commerce for International Trade Policy Development

Mr. Ricardo Setti (Brazil) Brazilian Journalist; Latin American Business Consultant

Dr. Brian Shaw (United States) Deputy National Intelligence Officer for Science & Technology, National Intelligence Council

Mr. Eduardo Talero (United States) Principal Informatics Specialist and Informatics Procurement Advisor, World Bank

Dr. Gregory Treverton (United States) Senior Consultant, RAND; Senior Fellow, Pacific Council on International Policy

Ms. Regina K. Vargo (United States) Deputy Assistant Secretary of Commerce for the Western Hemisphere, U.S. Department of Commerce

Mr. Robert A. Vitro (United States) Intersectoral, Regional and Special Programs, Information Technology for Development Division, Inter-American Development Bank

Professor Ernest Wilson (United States) Director, Center for International Development and Conflict Management, University of Maryland at College Park

Mr. Robert Worden (United States) Federal Research Division, Library of Congress

THE APRIL 2001 CONFERENCE ON THE COURSE OF THE INFORMATION REVOLUTION IN EUROPE

Dr. Robert H. Anderson (United States) Senior Information Scientist and Head, Information Sciences Group, RAND

Mr. Neil Bailey (United Kingdom) Managing Director, Empower Dynamics

Dr. Tora Kay Bikson (United States) Senior Behavioral Scientist, RAND

Dr. Carl Bildt (Sweden) Special United Nations Envoy for the Balkans; Former Prime Minister of Sweden; Member, Advisory Board, RAND Europe

Mr. Daniel Bircher (Switzerland) Head, Information and Process Security, Ernst Basler & Partners Ltd.

Mr. Maarten Botterman (The Netherlands) Program Director, Information and Communications Technology Policy Research, RAND Europe


Mr. J. C. Burgelman (Belgium) SMIT-VUB

Dr. Gabriella Cattaneo (Italy) Databank Consulting

Dr. Jonathan Cave (United States) Senior Economist, RAND Europe

Mr. Anders Comstedt (Sweden) President, Stokab

Ms. Renée Cordes (Belgium) Freelance Journalist, Brussels

Mr. Ian Culpin (Belgium) Martech International, Brussels

Ms. Carine Dartiguepeyrou (France) Consultant, RAND Europe; Formerly of Solving International, Paris

Ms. Kitty de Bruin (The Netherlands) Director, NT FORUM

Mr. Pol Descamps (Belgium) Consultant, PTD Partners

Mr. Job Dittberner (United States) National Intelligence Council

Mr. Bob Ford (United Kingdom) Senior Research and Development Manager, British Telecommunications

Dr. Lawrence K. Gershwin (United States) National Intelligence Officer for Science & Technology, National Intelligence Council

Dr. Eugene C. Gritton (United States) Director, Acquisition and Technology Policy Program, RAND

Mr. Kurt Haering (Switzerland) Director, Foundation InfoSurance, Zurich

Dr. Kris Halvorsen (Norway) Center Director, Solutions and Services Technologies, Hewlett Packard Laboratories

Professor Dr. Bernhard M. Hämmerli (Switzerland) Professor of Informatics, Communications and Security, Applied University of Technology, Lucerne
Dr. Andrej Heinke (Germany) DaimlerChrysler

Dr. Richard O. Hundley (United States) Senior Physical Scientist and Manager, Information Revolution Project, RAND

Col. Eng. Aurelian Ionescu (Romania) CIO and IT Advisor to State Secretary, Romania Ministry of National Defense, Bucharest

Dr. Suzanne Jantsch (Germany) Project Manager, Information Technology Communications, IABG

Dr. Peter Johnston (United Kingdom) Head of New Methods of Work, Information Society Directorate-General, European Commission
Professor Sergei Kapitza (Russia) Academy of Science, Moscow

Mr. Thomas Koeppel (Switzerland) Section Head, Service for Analysis and Prevention, Swiss Federal Office of Police, Bern

Mr. Ivo Kreiliger (Switzerland) Deputy Intelligence Coordinator, Assessment and Detection Bureau, Bern

Professor Eddie C. Y. Kuo (Singapore) Dean, School of Communication Studies, Nanyang Technological University, Singapore

Mr. David Leevers (United Kingdom) VERS Associates

Mr. Stephan Libiszewski (Switzerland) Attaché for IT, Swiss Mission to NATO, Brussels

Dr. Erkki Liikanen (Finland) Commissioner, Enterprise and Information Society, European Commission

Professor Arun Mahizhan (Singapore) Deputy Director, Institute of Policy Studies, Singapore

Dr. Joan Majo (Spain) Institut Catalan de Tecnologia

Dr. John McGrath (RN retired) (United Kingdom) Ex Dean, Royal Navy Engineering College, Manadon

Dr. Adrian Mears (United Kingdom) Technical Director, Defence Evaluation and Research Agency, Farnborough

Mr. Horace Mitchell (United Kingdom) Founder and CEO, Management Technology Associates

Dr. C. Richard Neu (United States) Senior Economist, RAND

Dr. Michelle Norgate (Switzerland) Center for Security Studies and Conflict Research, Swiss Federal Institute of Technology, Zurich

Sir Michael Palliser (United Kingdom) Chairman, Advisory Board, RAND Europe; Former Vice Chairman, Samuel Montagu & Co., London

Dr. Sarah Pearce (United Kingdom) Parliamentary Office of Science & Technology, London
Mr. Ian Pearson (United Kingdom) Futurologist, British Telecommunications Laboratories

Dr. Robert Pestel (Germany) Senior Scientific Officer, Information Society Directorate-General, European Commission


Prof. Richard Potter (United Kingdom) Defence Evaluation and Research Agency, Farnborough

Dr. Michel Saloff-Coste (France) MSC & Partners, Paris

Mr. Maurice Sanciaume (France) Government Affairs Manager Europe, Agilent Technologies Belgium

Dr. Brian Shaw (United States) Deputy National Intelligence Officer for Science & Technology, National Intelligence Council

Mr. Mark Stead (United Kingdom) Member of the Director General Information Office of the Ministry of Defence

Mr. Eddie Stewart (United Kingdom) DERA Webmaster, Defence Evaluation and Research Agency

Professor Reima Suomi (Finland) University of Turku, Finland

Ms. Pamela Taylor (United Kingdom) E-Business Policy Advisor, Confederation of British Industry

Mr. Tom Tesch (Belgium) Technical University of West Flanders, Kortrijk, Belgium

Professor Paul Van Binst (Belgium) Director, Telematics and Communications Services, Free University of Brussels

Mr. Lorenzo Veleri (United Kingdom) Policy Analyst, Kings College, London

Mr. Phillip Webb (United Kingdom) Chief Information Officer and Chief Knowledge Officer, Defence Evaluation and Research Agency, Farnborough

Professor Raoul Weiler (Belgium) University of Louvain

Dr. Walter Widmer (Switzerland) Head, IT Security Switzerland, UBS