[Paleopsych] Technology Review: The Internet Is Broken

Premise Checker checker at panix.com
Fri Jan 13 16:53:26 UTC 2006


The Internet Is Broken
http://www.technologyreview.com/infotech/wtr_16051,258,p1.html

[This is confusing. There are three parts to the article. First is part 1,
which is fine. Then comes part 2, with some feedback that is not in the
dead-tree version. Then part 3. But the dead-tree version had additional
paragraphs. I typed in the URL above, changing it to p4. It consisted of
yet more comments. Changing it to p5 results in the very same additional
comments. I follow it all with a short article referenced in the main
one, "Click, 'Oh Yeah?'"]

Monday, December 19, 2005

The Net's basic flaws cost firms billions, impede innovation, and threaten 
national security. It's time for a clean-slate approach, says MIT's David 
D. Clark.

By David Talbot

In his office within the gleaming-stainless-steel and orange-brick jumble 
of MIT's Stata Center, Internet elder statesman and onetime chief protocol 
architect David D. Clark prints out an old PowerPoint talk. Dated July 
1992, it ranges over technical issues like domain naming and scalability. 
But in one slide, Clark points to the Internet's dark side: its lack of 
built-in security.

In others, he observes that sometimes the worst disasters are caused not 
by sudden events but by slow, incremental processes -- and that humans are 
good at ignoring problems. "Things get worse slowly. People adjust," Clark 
noted in his presentation. "The problem is assigning the correct degree of 
fear to distant elephants."

[[37]Click here to view graphic representations of David D. Clark's four 
goals for a new Internet architecture.]

Today, Clark believes the elephants are upon us. Yes, the Internet has 
wrought wonders: e-commerce has flourished, and e-mail has become a 
ubiquitous means of communication. Almost one billion people now use the 
Internet, and critical industries like banking increasingly rely on it.

At the same time, the Internet's shortcomings have resulted in plunging 
security and a decreased ability to accommodate new technologies. "We are 
at an inflection point, a revolution point," Clark now argues. And he 
delivers a strikingly pessimistic assessment of where the Internet will 
end up without dramatic intervention. "We might just be at the point where 
the utility of the Internet stalls -- and perhaps turns downward."

Indeed, for the average user, the Internet these days all too often 
resembles New York's Times Square in the 1980s. It was exciting and 
vibrant, but you made sure to keep your head down, lest you be offered 
drugs, robbed, or harangued by the insane. Times Square has been cleaned 
up, but the Internet keeps getting worse, both at the user's level, and -- 
in the view of Clark and others -- deep within its architecture.

Over the years, as Internet applications proliferated -- wireless
devices, peer-to-peer file-sharing, telephony -- companies and network
engineers came up with ingenious and expedient patches, plugs, and
workarounds. The result is that the originally simple communications
technology has become a complex and convoluted affair. For all of the
Internet's wonders, it is also difficult to manage and more fragile
with each passing day.

That's why Clark argues that it's time to rethink the Internet's basic
architecture, to potentially start over with a fresh design -- and
equally important, with a plausible strategy for proving the design's
viability, so that it stands a chance of implementation. "It's not as
if there is some killer technology at the protocol or network level
that we somehow failed to include," says Clark. "We need to take all
the technologies we already know and fit them together so that we get
a different overall system. This is not about building a technology
innovation that changes the world but about architecture -- pulling
the pieces together in a different way to achieve high-level
objectives."

Just such an approach is now gaining momentum, spurred on by the
National Science Foundation. NSF managers are working to forge a
five-to-seven-year plan estimated to cost $200 million to $300 million
in research funding to develop clean-slate architectures that provide
security, accommodate new technologies, and are easier to manage.

They also hope to develop an infrastructure that can be used to prove
that the new system is really better than the current one. "If we
succeed in what we are trying to do, this is bigger than anything we,
as a research community, have done in computer science so far," says
Guru Parulkar, an NSF program manager involved with the effort. "In
terms of its mission and vision, it is a very big deal. But now we are
just at the beginning. It has the potential to change the game. It
could take it to the next level in realizing what the Internet could
be that has not been possible because of the challenges and problems."

References

   37. http://www.technologyreview.com/infotech/wtr_16051,258,p1.html

http://www.technologyreview.com/InfoTech/wtr_16051,258,p2.html
Continued from Page 1

By David Talbot

Firewall Nation
When AOL updates its software, the new version bears a number: 7.0,
8.0, 9.0. The most recent version is called AOL 9.0 Security Edition.
These days, improving the utility of the Internet is not so much about
delivering the latest cool application; it's about survival.

In August, IBM released a study reporting that "virus-laden e-mails
and criminal driven security attacks" leapt by 50 percent in the first
half of 2005, with government and the financial-services,
manufacturing, and health-care industries in the crosshairs. In July,
the Pew Internet and American Life Project reported that 43 percent of
U.S. Internet users -- 59 million adults -- reported having spyware or
adware on their computers, thanks merely to visiting websites. (In
many cases, they learned this from the sudden proliferation of error
messages or freeze-ups.) Fully 91 percent had adopted some defensive
behavior -- avoiding certain kinds of websites, say, or not
downloading software. "Go to a neighborhood bar, and people are
talking about firewalls. That was just not true three years ago," says
Susannah Fox, associate director of the Pew project.

Then there is spam. One leading online security company, Symantec,
says that between July 1 and December 31, 2004, spam surged 77 percent
at companies that Symantec monitored. The raw numbers are staggering:
weekly spam totals on average rose from 800 million to more than 1.2
billion messages, and 60 percent of all e-mail was spam, according to
Symantec.

But perhaps most menacing of all are "botnets" -- collections of
computers hijacked by hackers to do remote-control tasks like sending
spam or attacking websites. This kind of wholesale hijacking -- made
more potent by wide adoption of always-on broadband connections -- has
spawned hard-core crime: digital extortion. Hackers are threatening
destructive attacks against companies that don't meet their financial
demands. According to a study by a Carnegie Mellon University
researcher, 17 of 100 companies surveyed had been threatened with such
attacks.

Simply put, the Internet has no inherent security architecture --
nothing to stop viruses or spam or anything else. Protections like
firewalls and antispam software are add-ons, security patches in a
digital arms race.

The President's Information Technology Advisory Committee, a group
stocked with a who's who of infotech CEOs and academic researchers,
says the situation is bad and getting worse. "Today, the threat
clearly is growing," the council wrote in a report issued in early
2005. "Most indicators and studies of the frequency, impact, scope,
and cost of cyber security incidents -- among both organizations and
individuals -- point to continuously increasing levels and varieties
of attacks."

And we haven't even seen a real act of cyberterror, the "digital Pearl
Harbor" memorably predicted by former White House counterterrorism
czar Richard Clarke in 2000 (see "[35]A Tangle of Wires"). Consider
the nation's electrical grid: it relies on continuous network-based
communications between power plants and grid managers to maintain a
balance between production and demand. A well-placed attack could
trigger a costly blackout that would cripple part of the country.

The conclusion of the advisory council's report could not have been
starker: "The IT infrastructure is highly vulnerable to premeditated
attacks with potentially catastrophic effects."

The system functions as well as it does only because of "the
forbearance of the virus authors themselves," says Jonathan Zittrain,
who cofounded the Berkman Center for Internet and Society at Harvard
Law School and holds the Chair in Internet Governance and Regulation
at the University of Oxford. "With one or two additional lines of
code...the viruses could wipe their hosts' hard drives clean or
quietly insinuate false data into spreadsheets or documents. Take any
of the top ten viruses and add a bit of poison to them, and most of
the world wakes up on a Tuesday morning unable to surf the Net -- or
finding much less there if it can."

Discuss

Hogwash by artMonster, 12/19/2005 11:43:06 AM
The internet is not broken; MS Windows is. The issue of unwanted email
(spam) warrants some changes in the underlying structure, but the other
problems are really OS problems, and Windows bears the brunt of
responsibility for this. Major structural changes to how the internet
works would be unwise, and would probably open up more control by either
the government or Microsoft. Neither is desirable or beneficial for the
end user. So who really benefits from this FUD about the internet being
broken? Not too difficult to figure out...
Spam proliferation by Bellinghamster, 12/19/2005 4:52:05 PM
Despite my ISP's efforts to filter emailed spam, my in-basket is
typically less than one-quarter legitimate message traffic. But purging
spam isn't my greatest inefficiency. The time I spend maintaining
firewall, virus, and malware software is the truly significant
inefficiency.
New protocols -- we don't use current ones! by Matej, 12/19/2005 9:11:11 PM
Hi,
when this article was mentioned on "The World" (WGBH) they mentioned
that NSF is planning to release $300M for "development of new protocols
which would make Internet safe" (and another $300M later for
implementation). Why in the world do we need new protocols when we are
not using the current ones? My Linux here has support for IPv6, S/MIME,
etc., but no one in the world uses them, because the problem with an
unsafe Internet is not in the technology but in organizational and
social problems (like how to make everybody identifiable over the
Internet, when the US public doesn't want to be identified in the first
place).
Matej
Great sales pitch by Mike, 12/20/2005 1:30:05 AM
Isn't one of the best ways to get someone to spend money to instill
fear? Some people would argue that's how Congress is duped into
appropriating funds -- how close is Cambridge to DC? :-)
If they want to spend $200M, send it my way and I'll demonstrate a cool
solution to make it easier to deploy new web-based services, to any
device, saving major corporations billions in the process.
Cheers!
The Internet is in need of repair by Owen N. Martinez, 12/20/2005 5:47:24 AM
Like any system, the Internet needs to be tuned up or repaired as things
get out of control. Who is qualified to determine what to do, and who
should control the system? Preferably the same entity, or two very close
ones, that have the confidence of the majority of the users. The US
government need not apply.
hogwash by Si, 12/20/2005 4:31:01 AM
I'm a day late on this and notice that artMonster has hit it perfectly.
Big Brother wants control. I would hate to think what the internet would
be like if they redesigned it along the lines suggested.
Hogwash by Fergus Doyle, 12/20/2005 5:39:59 AM
I agree with the other two guys here: the problems are down to MS
software -- specifically, that MS cannot or will not keep up with
changing circumstances by releasing software. I have no spyware on my
(Windows) system and no viruses. E.g., use Firefox, not Internet
Explorer, and Thunderbird, not Outlook Express, and most of your
problems with Windows are solved. Use Linux and you don't even have to
worry this much.
It's the infrastructure that needs changing by E Feustel, 12/20/2005 6:30:49 AM
It's the routers and the protocols that need changing to permit secure,
higher-speed operation, including authentication of the traffic on the
net -- no more fake IP addresses, and if the packet says that X sent it,
then X did actually send it. No more DNS hacking -- if you ask for X's
address, you get X's address, not Y's. And you get it with the minimum
computation in a reliable manner, even with pieces of the net going
down.
Hogwash indeed. by mrxsmb, 12/28/2005 4:30:12 AM
Although hopefully grown-ups don't need more of an alert than
"PowerPoint presentation" and "$400 million research funding" in close
proximity to know that. The issues highlighted with MS [the debilitating
operating system, not the debilitating physical affliction] and its
usability-over-functionality approach are all valid, but other OSs and
applications have their own issues.
Of course, business could actually pony up the money to build their own
networks and not use the internet, but then how would that save them
money? I believe some already do, as do governments, and sensibly so.
One bank in Australia has actually got with the program and realised
they should issue their on-line banking customers with a swipe-and-PIN
security system, the same as on an ATM, at each and every house. How
much of the "problems" discussed would be solved by this simple change
in attitude?
future of the internet by p, 12/20/2005 8:31:26 AM
The network (as opposed to the endpoints) doesn't need major new
security features. I admit larger TCP ISNs would be good, and SMTP
should have a way to reject mail per-user after the mail server has read
all of it. Apart from that, what you need is security in execution
environments (where some of those EEs are OSs and some are browsers,
etc.). This is one of several similar approaches -- it's no longer
adequate to let a program do anything it chooses. The programs can't be
trusted while handling suspect data. This is a different threat model
from most computer security work historically.
http://www.google.co.uk/url?sa=U&start=5&q=http://www.cs.columbia.edu/~smb/papers/subos.pdf&e=42
Extensions to existing OS software are effective at providing this kind
of security.
http://whitepapers.zdnet.co.uk/0,39025945,60150583p-39000584q,00.htm
Hogwash Support by Dr Hacker, 12/20/2005 10:35:07 AM
artMonster is right on. The royalists from Ma Bell refuse to give up
their 100+ year monopoly. I say give it up and become Americans instead
of British-like thugs. We don't want another 1776, but it looks like we
may need one!
Designers did it by Sundararajan Srinivasan, 12/28/2005 5:47:34 AM
Some of the internet bugs we have now have nothing to do with the OS;
they come from the way it was designed. For instance, SMTP does not
provide authentication by default. I can pose as bill.gates at
microsoft.com with an SMTP server without any problem, because SMTP does
not verify the "from" address. The solution could be the use of digital
signatures. The Internet and all the related protocols could have been
designed to be more secure, but then they would not have gained the
popularity they have now. That is why we are now paying security experts
to build layers of security.
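
[A minimal Python sketch of the digital-signature fix suggested in the
comment above, roughly the idea behind DKIM and S/MIME: the sender's key
signs the claimed "from" address together with the body, so a forged
address fails verification. The keys, addresses, and message are
invented for illustration, and the Ed25519 primitives come from the
third-party "cryptography" package.]

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # The sending domain holds the private key; recipients obtain the matching
    # public key out of band (DKIM, for instance, publishes it in DNS).
    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    def sign_message(sender: str, body: str) -> bytes:
        # Bind the claimed sender address to the body in one signature.
        return private_key.sign(f"{sender}\n{body}".encode())

    def verify_message(sender: str, body: str, signature: bytes) -> bool:
        # Accept the message only if the signature matches both fields.
        try:
            public_key.verify(signature, f"{sender}\n{body}".encode())
            return True
        except InvalidSignature:
            return False

    sig = sign_message("alice@example.com", "Quarterly report attached.")
    print(verify_message("alice@example.com", "Quarterly report attached.", sig))       # True
    print(verify_message("bill.gates@example.com", "Quarterly report attached.", sig))  # False: forged sender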

Impact of Emerging Technologies: The Internet Is Broken -- Part 3
http://www.technologyreview.com/InfoTech/wtr_16056,258,p1.html
Wednesday, December 21, 2005

The Internet Is Broken -- Part 3

Researchers are working to make the Internet smarter -- but that could
make it even slower, warn experts like Google's Vinton Cerf.

By David Talbot

This article -- the cover story in Technology Review's
December-January print issue -- was divided into three parts for
presentation online. This is part 3; [34]part 1 appeared on
December 19 and [35]part 2 on December 20.

In part 1, we argued (with the help of one of the Internet's "elder
statesmen," MIT's David D. Clark) that the Internet has become a vast
patchwork of firewalls, antispam programs, and software add-ons, with
no overall security plan. Part 2 dealt with how we might design a
far-reaching new Web architecture, with, for instance, software
that detects and reports emerging problems and authenticates users. In
this third part, we examine differing views on how to deal with
weaknesses in the Internet, ranging from an effort at the National
Science Foundation to launch a $300 million research program on future
Internet architectures to concerns that "smarter" networks will be
more complicated and therefore error-prone.

The Devil We Know
It's worth remembering that despite all of its flaws, all of its
architectural kluginess and insecurity and the costs associated with
patching it, the Internet still gets the job done. Any effort to
implement a better version faces enormous practical problems: all
Internet service providers would have to agree to change all their
routers and software, and someone would have to foot the bill, which
will likely come to many billions of dollars. But NSF isn't proposing
to abandon the old network or to forcibly impose something new on the
world. Rather, it essentially wants to build a better mousetrap, show
that it's better, and allow a changeover to take place in response to
user demand.

To that end, the NSF effort envisions the construction of a sprawling
infrastructure that could cost approximately $300 million. It would
include research labs across the United States and perhaps link with
research efforts abroad, where new architectures can be given a full
workout. With a high-speed optical backbone and smart routers, this
test bed would be far more elaborate and representative than the
smaller, more limited test beds in use today. The idea is that new
architectures would be battle tested with real-world Internet traffic.
"You hope that provides enough value added that people are slowly and
selectively willing to switch, and maybe it gets enough traction that
people will switch over," Parulkar says. But he acknowledges, "Ten
years from now, how things play out is anyone's guess. It could be a
parallel infrastructure that people could use for selective
applications."

[[36]Click here to view graphic representations of David D. Clark's
four goals for a new Internet architecture.]

Still, skeptics claim that a smarter network could be even more
complicated and thus failure-prone than the original bare-bones
Internet. Conventional wisdom holds that the network should remain
dumb, but that the smart devices at its ends should become smarter.
"I'm not happy with the current state of affairs. I'm not happy with
spam; I'm not happy with the amount of vulnerability to various forms
of attack," says Vinton Cerf, one of the inventors of the Internet's
basic protocols, who recently joined Google with a job title created
just for him: chief Internet evangelist. "I do want to distinguish
that the primary vectors causing a lot of trouble are penetrating
holes in operating systems. It's more like the operating systems don't
protect themselves very well. An argument could be made, 'Why does the
network have to do that?'"

According to Cerf, the more you ask the network to examine data -- to
authenticate a person's identity, say, or search for viruses -- the
less efficiently it will move the data around. "It's really hard to
have a network-level thing do this stuff, which means you have to
assemble the packets into something bigger and thus violate all the
protocols," Cerf says. "That takes a heck of a lot of resources."
Still, Cerf sees value in the new NSF initiative. "If Dave
Clark...sees some notions and ideas that would be dramatically better
than what we have, I think that's important and healthy," Cerf says.
"I sort of wonder about something, though. The collapse of the Net, or
a major security disaster, has been predicted for a decade now." And
of course no such disaster has occurred -- at least not by the time
this issue of Technology Review went to press.

References

   36. http://www.technologyreview.com/InfoTech/wtr_16056,258,p1.html

The Impact of Emerging Technologies: The Internet Is Broken -- Part 3
http://www.technologyreview.com/InfoTech/wtr_16056,258,p4.html

Discuss

slowing down of Internet by H.M. Hubey, 12/21/2005 10:56:22 AM
Long shift registers (multiple streams if needed for speed) at routers
to catch worms, viruses, Trojan horses, etc. will not slow down the
Internet. The bits will be XORed as they speed along at their normal
speed. The other end of the XOR will be registers that can be loaded
with bit-images of unwanted programs (e.g., viruses). It will be a
combination of hardware and software. Since it will be expensive, it
will be best to implement at the routers. If the routers "surrounding" a
country known for spamming can catch these, it will be harder for this
kind of software to spread all over the Internet. In effect, one can
quarantine a country so that spam and viruses do not infect the rest of
the Internet.
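
[A toy Python sketch of the scanning idea in the comment above, with a
plain substring match standing in for the XOR/shift-register hardware
the commenter describes; the signatures and packet are made up. As the
reply further down notes, encryption and fragmentation limit what any
such router-level check can actually see.]

    # Table of byte patterns ("bit-images") for known unwanted programs.
    KNOWN_SIGNATURES = {
        "example-worm": b"\xde\xad\xbe\xef",   # placeholder pattern, not a real worm
        "example-spam": b"CHEAP MEDS",
    }

    def scan_payload(payload: bytes) -> list:
        # Return the names of any known signatures appearing in this payload.
        return [name for name, pattern in KNOWN_SIGNATURES.items() if pattern in payload]

    packet = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\nCHEAP MEDS inside"
    print(scan_payload(packet))   # ['example-spam']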
new internet? by Erik Karl Sorgatz, 12/21/2005 1:02:00 PM
If all the spam and cookies, virus and worm code were cut, we'd have 50%
more bandwidth! Then a little blacklist to keep the spammers from
gaining access after a third strike, and we might find that the existing
internet is fairly responsive. Tax it? Nah. Regulate it? Yes, perhaps
put all the porn garbage on its own backbone, with its own domain, and
start fresh. It might be a good idea if the college kids were only
allowed read-only access to USENET for the first six months too. The
commercial interests should be blocking the known spam-friendly domains,
and the pill-vendors could be held responsible for their commercial
spams too. It's a slippery slope, but the end user shouldn't be required
to support the scum that perpetrate scams and spam.
Long shift registers in routers by Jesse, 12/27/2005 5:53:19 PM
Will not work.
1. You don't always have access to the contents (encrypted).
2. You don't always have access to the entire message (incomplete
messages).
3. You don't even necessarily have access to the entire packet
(out-of-order fragment delivery).
Check the Security Focus website, and read the white paper on router
hacking... You just CANNOT validate the contents at routers.
The Internet is Broken by Grant Callaghan, 12/21/2005 11:09:25 AM
It's all software -- even the hardware -- and the only question seems to
be, "Where do we put the fixes?" I think they belong at the end of the
process rather than at the beginning or in the middle.
Charging a small amount per message would cut down on the spam, say a
fraction of a penny, and it would generate enough money to police the
system, free up bandwidth, and catch bad hackers simply because the
volume of traffic is so large.
The only danger I see to this is that the government tends to want to
feed its cash cows with ever larger increases in taxation of any kind.
If you let them start taxing the internet, there will be no end to it.
Encryption? by Aaron, 12/21/2005 12:52:35 PM
I think it is odd that an article about the future of the internet makes
no mention of encryption. Public-key encryption, the ability to know who
is saying what, has existed for longer than I have been alive.
It also seems that a lot of the original ideas that made the internet
popular, decentralization and anonymous communication, are lost on its
current inhabitants. My mother couldn't care less whether emails from me
are signed; she just wants less spam in her mailbox.
Interesting idea about access charges by Dmitry Afanasiev, 12/26/2005 6:34:07 AM
http://blog.tomevslin.com/2005/01/voip_spam_and_a.html
Here access means access to the user. Obviously, this needs sender
authentication, automatic charging or balance verification, and probably
some sort of rule-based message cost negotiation (e.g., I want to
deliver this message, but only if it costs me less than $xy.z). But it
makes a lot of sense, since (thanks to Moore's law) human time and
attention are now the most scarce and expensive resources on the Net.
Email postage, not so good by B. Curtis, 12/21/2005 1:04:06 PM
Although it seems simple, Mr. Callaghan's concept of a small fee per
email is no good in reality. It would equally penalize legitimate
mass-email systems (newsletters, discussion lists, etc.) as well as
spammers. E.g., there has been talk about sending tsunami warnings to
people's cell phones via email; I'd hardly want to charge the
organization millions of dollars right when they're trying to save my
life. If the postage were optional (the recipient chooses if the sender
pays), then you're talking about needing to positively identify both
sender and receiver of an email, which amounts to SSL in every home.
Some have posited using a difficult puzzle to extract a "cost" of
sending emails even though no real money is involved; the same
counter-arguments apply.
No, postage on email is just one of those fun ideas that won't work.
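
[A small Python sketch of the "difficult puzzle" scheme mentioned above,
in the spirit of hashcash: the sender must find a nonce whose hash
clears a difficulty threshold, which is cheap for one message but costly
at spam volumes, while the recipient verifies it with a single hash. The
difficulty value and function names are illustrative only.]

    import hashlib
    from itertools import count

    DIFFICULTY_BITS = 20   # more bits = more sender CPU per message

    def mint_stamp(recipient: str, subject: str) -> int:
        # Search for a nonce whose SHA-256 digest falls below the target.
        target = 1 << (256 - DIFFICULTY_BITS)
        for nonce in count():
            digest = hashlib.sha256(f"{recipient}|{subject}|{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def check_stamp(recipient: str, subject: str, nonce: int) -> bool:
        # Verification costs one hash, so the receiver's work stays negligible.
        digest = hashlib.sha256(f"{recipient}|{subject}|{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY_BITS))

    nonce = mint_stamp("bob@example.org", "Lunch on Friday?")
    print(check_stamp("bob@example.org", "Lunch on Friday?", nonce))   # True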
Tariffing email by Jim Hayes, 12/21/2005 1:54:45 PM
B. Curtis seems not to be aware that postage of about $0.20 per letter,
and who knows what per catalog, does not keep my mailbox at the end of
my driveway from getting filled with junk mail on paper, especially in
the last month.
Legit emailers would gladly pay a penny per email to interested
recipients, while spammers sending out tens of millions of messages a
day to random addresses -- many of whom seem to illegally use some of my
email addresses as return addresses, by the way -- would be put to rest.
By law, 911 calls are toll-free.
The issue of billing is easy: include 1,000 emails per month in an
account from an ISP, so only the excess is billed, and few users will
even need to be billed.
BTW, I do know companies who have limited access to the Internet for
employees because of overloads of viruses and spam, as well as abuses in
downloading inappropriate material -- I fired an employee myself for
storing his downloaded porn on a company computer.
Parallel Internet by Khushnood Naqvi, 12/28/2005 3:27:48 AM
The idea of having a parallel Internet is good. The parallel Internet
can be implemented on the next generation of protocols -- all with
authentication (through digital certificates) and the like -- and will
have no spam. Commercial sites would perhaps like to have a presence on
the more secure Internet. Users also won't mind connecting to a
different Internet for things like banking, or any business transaction
for that matter. Even if users have to pay a slightly higher amount for
it, it will be a success.
But the only problem I see with that is that the Internet in its current
form will be abandoned and so become more hazardous for people who
continue to rely on it.
Press Re-Start button by 666, 12/21/2005 3:04:41 PM
The core problem is that the Internet, like its underlying software, is
becoming legacy and is an institution.
The problem with all software is that the underlying software is hard
and unmaintainable instead of being soft and flexible.
This will be rectified by my chosen acronym.
Security vs privacy by Jose I. Icaza, 12/23/2005 9:40:04 PM
Can we trust a government (NSF) initiative to design a more secure
internet that nevertheless makes
government and corporate tracking of individual users and their data
at least as difficult as the
present internet?

The Impact of Emerging Technologies: Click "Oh yeah?"
http://www.technologyreview.com/InfoTech/wtr_15999,258,p1.html
Dec. 2005/Jan. 2006

How the Web's inventor viewed security issues a decade ago.

By Katherine Bourzac

As part of a larger proposed effort to rethink the Internet's
architecture (see "[27]The Internet Is Broken"), Internet elders such
as MIT's David D. Clark argue that authentication -- verification of
the identity of a person or organization you are communicating with --
should be part of the basic architecture of a new Internet.
Authentication technologies could, for example, make it possible to
determine if an e-mail asking for account information was really from
your bank, and not from a scam artist trying to steal your money.

Back in 1996, as the popularity of the World Wide Web was burgeoning,
Tim Berners-Lee, the Web's inventor, was already thinking about
authentication. In an article published in July of that year,
Technology Review spoke with him about his creation. The talk was wide
ranging; Berners-Lee described having to convince people to put
information on the Web in its early years and expressed surprise at
people's tolerance for typing code.

But he also addressed complaints about the Web's reliability and
safety. He proposed a simple authentication tool -- a browser button
labeled "Oh, yeah?" that would verify identities -- and suggested that
Web surfers take responsibility for avoiding junk information online.
Two responses are excerpted here.

From Technology Review, July 1996:

TR: The Web has a reputation in some quarters as more sizzle than
steak -- you hear people complain that there's no way of judging the
authenticity or reliability of the information they find there. What
would you do about this?

Berners-Lee: People will have to learn who they can trust on the Web.
One way to do this is to put what I call an "Oh, yeah?" button on the
browser. Say you're going into uncharted territory on the Web and you
find some piece of information that is critical to the decision you're
going to make, but you're not confident that the source of the
information is who it is claimed to be. You should be able to click on
"Oh, yeah?" and the browser program would tell the server computer to
get some authentication -- by comparing encrypted digital signatures,
for example -- that the document was in fact generated by its claimed
author. The server could then present you with an argument as to why
you might believe this document or why you might not.
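
[A minimal Python sketch of what an "Oh, yeah?" check might do behind
the button: look up the claimed author's published public key and verify
a digital signature on the document. The key registry here is a plain
dictionary standing in for whatever real lookup (a certificate
authority, a keyserver) a browser would use; all names are invented, and
the signing primitives again come from the third-party "cryptography"
package.]

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    author_keys = {}   # hypothetical registry: claimed author -> published public key

    def publish_key(author: str) -> ed25519.Ed25519PrivateKey:
        # The author generates a key pair and publishes the public half.
        key = ed25519.Ed25519PrivateKey.generate()
        author_keys[author] = key.public_key()
        return key

    def oh_yeah(document: bytes, claimed_author: str, signature: bytes) -> str:
        # Report whether the document was really signed by its claimed author.
        key = author_keys.get(claimed_author)
        if key is None:
            return "No public key published for " + claimed_author + "; cannot verify."
        try:
            key.verify(signature, document)
            return "Signature checks out: " + claimed_author + " signed this document."
        except InvalidSignature:
            return "Warning: this document was NOT signed by " + claimed_author + "."

    site_key = publish_key("news.example.org")
    doc = b"Tuition rises 3 percent next year."
    print(oh_yeah(doc, "news.example.org", site_key.sign(doc)))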

  ...Another common gripe is that the Web is drowning in banal and
useless material. After a while, some people get fed up and stop
bothering with it.

To people who complain that they have been reading junk, I suggest
they think about how they got there. A link implies things about
quality. A link from a quality source will generally be only to other
quality documents. A link to a low-quality document reduces the
effective quality of the source document. The lesson for people who
create Web documents is that the links are just as important as the
other content because that is how you give quality to the people who
read your article. That's how paper publications establish their
credibility -- they get their information from credible sources....You
don't go down the street, after all, picking up every piece of paper
blowing in the breeze. If you find that a search engine gives you
garbage, don't use it.



More information about the paleopsych mailing list