Technology Stocks : Symantec (SYMC) - What does it look like?


To: astyanax who wrote (1691)1/6/2000 4:25:00 PM
From: Elmer Flugum
 
A thousand ills require a thousand cures

economist.com

Researchers are borrowing from immunology to improve the security of computer networks



THE world's computers seem to have survived the threat of acute, millennium-induced
failure as their clocks ticked happily into the new year. But two chronic and altogether more
sinister hazards to their health persist: viruses and malicious hackers.

Last month David Smith, the creator of a particularly nasty virus called Melissa, pleaded
guilty to causing $80m of damage to American businesses. This figure, however, was the
result of a plea-bargain: the true cost of the damage is probably closer to $400m in America
alone. According to Computer Economics, a consultancy based in Carlsbad, California,
computer viruses cost the world $12.1 billion in clean-up costs and lost productivity during
1999. In one incident, for example, a manufacturing plant operated by Dell, a computer
maker, was disrupted for two days by a virus outbreak.

Meanwhile, protecting institutional networks from attacks by external hackers is thought to
account for 2.5% of global spending on information technology, in other words about $25
billion. When a company's network security is breached, the standard response is to
disconnect that network from the Internet until the problem has been fixed. But as more and
more firms come to rely on Internet links with their suppliers and customers, this becomes
ever more painful and costly.

Send in the biologists


Evidently, a new approach to computer security is needed. And two groups of researchers
believe they have found one. To prevent computers from succumbing to viruses and other
network-borne horrors, they are borrowing ideas from immunology, and building digital
immune systems.

Giving a computer the ability to fight off infectious agents sounds odd, but it makes a lot of
sense. Indeed, the parallels between information technology and molecular biology are
striking. Computer software is often likened to DNA, and vice versa; and computer viruses,
like biological ones, are nasty strings of code that exploit their hosts to replicate themselves
and cause trouble.

Anti-virus software has long played up its medical overtones, with talk of vaccination and
inoculation, and logos involving syringes. Yet its immunological nature is only skin-deep.
Current programs usually scan for fragments of code that identify particular, known
viruses, and eradicate them when found. Real immune systems are far more complex. They
are also robust, fault-tolerant and able to respond quickly to an intruder, all qualities that
are desirable in a digital immune system, too.
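
To make the contrast concrete, here is a minimal sketch of that kind of conventional signature scanning; the virus name, the signature bytes and the sample data are invented for illustration only and are not taken from any real product.

# A toy signature scanner of the conventional kind described above.
KNOWN_SIGNATURES = {
    "ExampleVirus.A": bytes.fromhex("deadbeef90909090"),   # hypothetical byte pattern
}

def scan(data: bytes) -> list:
    """Return the names of all known signatures found in the data."""
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in data]

# Toy usage: an "infected" byte string containing the pattern above.
sample = b"ordinary program bytes" + bytes.fromhex("deadbeef90909090")
print(scan(sample))   # ['ExampleVirus.A']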

Just such a system was formally proposed in 1992 by Stephanie Forrest, of the University
of New Mexico, and Alan Perelson of the Los Alamos National Laboratory, also in New
Mexico. But the idea remained merely theoretical for many years. That is now changing.
IBM and Symantec, a software company located in Cupertino, California, have started
testing a commercial anti-virus package modelled on the immune system. Meanwhile, Dr
Forrest's group has constructed an experimental set-up, described in a paper recently
submitted to the Evolutionary Computation Journal, that mimics the way real immune
systems identify invading pathogens. This should detect unauthorised access to a
computer network.

The IBM software, developed at the company's Thomas J. Watson Research Centre in New
York state by a team led by Steve White, is called the Digital Immune System. It works by
exploiting computer networks to speed up the process of identifying and eradicating
viruses. In fact, it is the growing use of networks that has caused the problem to get so bad
in the first place. So the idea, according to David Chess, a member of the IBM team, is to
enable the cure to spread as quickly and easily as the disease.

The Digital Immune System works like this. Normally, when anti-virus software installed on
a personal computer (PC) detects a suspected but unknown virus that it cannot handle, it
sounds an alarm and waits for human operators to fix the problem. A PC with a Digital
Immune System installed, by contrast, automatically hands the suspect file over to a central
location for analysis. Here the file is scrutinised and then used to infect an isolated network
of PCs, which are automatically tweaked in order to trigger the virus, if that it be. Once a
virus has become active, the behaviour of the infected PCs is monitored so that two things
can be worked out: a signature by which to identify the virus in future, and an antidote to
counteract it.

The signature and the antidote are then tested on the original suspect file, and on the
triggered copies of the virus, before being passed back to the PC that first reported the
problem. Copies of the signature and the antidote are also spread around the Internet so
that they are ready for use when other computers report the infection. The entire process is
designed to happen automatically over the network, without the need for human
intervention, just like the spread of a virus.
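
The article gives no implementation detail, but the pipeline it describes can be restated as a toy, self-contained sketch. It is not IBM's Digital Immune System: every name below is invented, and the "analysis" step simply extracts a byte pattern so the example runs end to end.

from typing import Callable, Tuple

SIGNATURE_DB = {}   # stands in for the cures distributed around the Internet

def analyse_suspect(sample: bytes) -> Tuple[bytes, Callable[[bytes], bytes]]:
    """Pretend central analysis centre: choose a distinctive fragment as the
    signature and build an 'antidote' that strips it out of a file."""
    signature = sample[:8]                      # toy choice of identifying fragment
    def antidote(data: bytes) -> bytes:
        return data.replace(signature, b"")     # remove the viral fragment
    return signature, antidote

def handle_suspect(name: str, sample: bytes) -> None:
    signature, antidote = analyse_suspect(sample)    # 1. hand the file to central analysis
    cleaned = antidote(sample)                       # 2. test the antidote on the original
    assert signature not in cleaned                  # 3. verify before distribution
    SIGNATURE_DB[name] = (signature, antidote)       # 4. "spread" the cure to other hosts

# Toy usage: a PC reports a suspect file it cannot handle.
handle_suspect("suspect-0001", bytes.fromhex("deadbeef") * 4 + b"payload")
print(list(SIGNATURE_DB))   # ['suspect-0001']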

So much for theory. But, although IBM has demonstrated the whole system in prototype
form, it would be foolhardy to plug it into the Internet and expect it to work straight away.
Instead, it is being turned on one stage at a time. A trial currently under way at several large
firms enables suspect files to be passed automatically to a system administrator, who can
then decide whether or not to pass them on in turn to the analysis centre. Meanwhile, virus
experts at Symantec are using the IBM software to supplement existing techniques for
identifying viruses and concocting antidotes to them. There are, in other words, human
operators checking that everything is working as it should. But the plan is for the system to
be switched to automatic operation over the coming months.

Dr Chess is the first to admit that the immunological analogy is closer in some respects than
in others. While the detection of viruses is distributed over the Internet, for example, just as
the cells of the immune system are spread throughout the body, there is no biological
equivalent of the central analysis centre. But a computer, with its constantly changing
software, is not like an organism, whose DNA is fixed. Slavishly sticking to the biological
analogy and wiping out all new programs would not be a good idea.

In the case of Dr Forrest's Artificial Immune System, however, the analogy with biology is
far stronger. Her software, developed in conjunction with Steven Hofmeyr, a graduate
student, detects inappropriate behaviour on a network using software "antibodies" that
latch on to anything suspicious.

Immune systems are designed to recognise and destroy foreign bodies. To do this they
have to know what is, and is not, "foreign". That is achieved by a process called negative
selection. Lymphocytes, the immune system's principal cells, are created at random in the
thymus gland and in the bone marrow, but only those that do not react with any of the
body's naturally occurring molecules are released into the blood to search for intruders.
The rest are destroyed.

In Dr Forrest's system, every packet sent across the network is examined by stringing
together the address of the sender, the address of the receiver, and the "port number" on
which they are communicating to make a string of 49 binary digits (bits). These strings, the
equivalent of naturally occurring molecules in a body, are compared to a pool of randomly
generated 49-bit strings called detectors, the equivalent of the randomly generated
lymphocytes. If a detector has more than a certain number (currently 12) of contiguous bits
in common with a passing packet, the detector is deleted, and a new detector is generated
to replace it. Detectors that survive for two days without matching any packets on the
network are deemed to be different enough from legitimate strings to be likely to match only
foreign invaders.
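
A minimal sketch of this negative-selection step follows, assuming that "contiguous bits in common" refers to the r-contiguous-bits matching rule with r = 12, and compressing the two-day tolerisation period into a one-shot comparison against a sample of normal traffic; the packet encoding is abstracted away, so the "self" strings are simply generated at random here.

import random

STRING_LEN = 49   # sender address + receiver address + port number, per the article
R = 12            # contiguous matching bits that count as recognition

def random_string(n: int = STRING_LEN) -> str:
    return "".join(random.choice("01") for _ in range(n))

def matches(detector: str, packet: str, r: int = R) -> bool:
    """True if the two strings agree on at least r contiguous positions."""
    run = 0
    for d, p in zip(detector, packet):
        run = run + 1 if d == p else 0
        if run >= r:
            return True
    return False

def negative_selection(self_strings: list, n_detectors: int) -> list:
    """Generate random detectors, keeping only those that match no 'self' string."""
    detectors = []
    while len(detectors) < n_detectors:
        candidate = random_string()
        if any(matches(candidate, s) for s in self_strings):
            continue                      # reacted to normal traffic: delete and regenerate
        detectors.append(candidate)       # survived: likely to match only foreign packets
    return detectors

# Toy usage: 'self' is a small sample of normal-traffic strings.
normal_traffic = [random_string() for _ in range(50)]
print(len(negative_selection(normal_traffic, n_detectors=10)), "detectors survived")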

These strings are then used to detect deviations from the conditions in which they were
originally selected. If they fail to match any network traffic within another seven days, they
are deleted as redundant; but if they match more than a certain number of packets they
trigger an alarm. At this point a human operator decides whether the unusual behaviour
detected is permissible. If it is, the detector is deleted. Otherwise the detector is made
immortal, so that subsequent, similar intrusions can be detected. This is similar to the way
that so-called "memory B-cells" operate in real immune systems; once a host has been
exposed to a particular infection, it can be recognised again.
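
The detector life cycle just described can be sketched in the same spirit. The alarm threshold is invented (the article says only "more than a certain number of packets"), time is counted in abstract days, and the matching rule repeats the r-contiguous-bits assumption from the previous sketch; none of this is Forrest and Hofmeyr's actual code.

ALARM_THRESHOLD = 10    # hypothetical number of matched packets before an alarm
LIFESPAN_DAYS = 7       # per the article: idle detectors are deleted as redundant

def matches(detector: str, packet: str, r: int = 12) -> bool:
    """Same r-contiguous-bits rule as in the previous sketch."""
    run = 0
    for d, p in zip(detector, packet):
        run = run + 1 if d == p else 0
        if run >= r:
            return True
    return False

class Detector:
    def __init__(self, string: str):
        self.string = string
        self.match_count = 0
        self.days_idle = 0
        self.immortal = False    # set for confirmed intrusions, like a memory B-cell
        self.dead = False        # set when an operator rules the traffic permissible

    def observe(self, packet: str) -> bool:
        """Record a packet; return True if an alarm should be raised."""
        if matches(self.string, packet):
            self.match_count += 1
            self.days_idle = 0
            return self.match_count >= ALARM_THRESHOLD
        return False

def operator_verdict(detector: Detector, intrusion_confirmed: bool) -> None:
    """Human decision after an alarm: keep the detector forever, or drop it."""
    if intrusion_confirmed:
        detector.immortal = True
    else:
        detector.dead = True

def end_of_day(detectors: list) -> list:
    """Age the detectors; drop dead ones and mortal ones that stayed idle too long."""
    survivors = []
    for d in detectors:
        d.days_idle += 1
        if not d.dead and (d.immortal or d.days_idle < LIFESPAN_DAYS):
            survivors.append(d)
    return survivors

# Toy usage: a single detector that keeps matching the same suspicious packet.
d = Detector("0" * 49)
alarm = any(d.observe("0" * 49) for _ in range(ALARM_THRESHOLD))
print("alarm raised:", alarm)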

That may sound a rather haphazard approach to network security, as detectors are
randomly generated and die out from time to time. But Dr Forrest's preliminary tests
suggest that her immunological approach is surprisingly effective. The rate of false alarms
was significantly lower than that of competing systems, and when the system was
presented with simulated attacks, it detected all of the common security breaches it was
tested against.

Dr Forrest plans to extend her approach to other aspects of computer security, in
particular by automating the response to an alarm, rather than relying on human
intervention. And, she suggests, computer-security specialists will increasingly borrow
from biology as software continues to become more complicated. Rather than treating
software and networks with a rigid, engineering mindset, she thinks it is more realistic to
see them as ecosystems. As a result, she suggests that large programs such as operating
systems should be made in such a way that no two copies are exactly alike. Monocultures
are, after all, more susceptible to being wiped out by disease than mixed fields.

Similar sentiments are expressed by the researchers at IBM. Dr Chess speculates, for
example, that bug fixes to existing programs might be delivered automatically in the future
as tiny patches, in a manner reminiscent of gene therapy.

No doubt the virus-writers and hackers will find new ways to attack these novel defence
mechanisms, just as micro-organisms can evolve their way around immune responses. But
digital immune systems should make it easier to keep up with this evolution by
automatically devising new forms of defence, as happens in a real immune system. And by
exploiting the Internet to distribute the cures to computer diseases, such systems offer a
promising way to prevent outbreaks from becoming epidemics.




To: astyanax who wrote (1691)1/7/2000 3:47:00 PM
From: jbe
 
Extraordinary that you should ask! I just read your analysis, which is why I popped in here. To think that the very person I should like to ask some questions of is here!

First of all, let me backtrack. On the very shaky assumption that the market will someday actually become rational, I ran a Telescan search today looking for companies with high ROE, high EPS rank, plenty of free cash flow, low debt, and relatively low p/e ratios.

The search turned up two software companies: Compuware & Symantec. I then went over to Morningstar, and looked up the analyses of both companies.

On the whole, I really like the Morningstar analyses. They do an excellent job of explaining a company's strategy, what its strong & weak points are, and its position/prospects in its industry. BUT -- I don't get the grading system.

First of all, I think that some analysts, like teachers, may be easy graders, while others are tougher graders. Some analysts seem to grade a company in relation to its particular industry; others seem to grade it in relation to the S&P. Or perhaps the problem is that the analyses and/or grades are not dated (they should be!).

Just how do you explain, for example, why brokerage firm SWS, with a p/e of 12.5 (as against an industry average of 21.5) gets a failing grade (F) for valuation (!)?

CPWR, on the other hand, quite properly gets an "A" for valuation, with a p/e more than twice as high as SWS's -- 28 -- versus an industry average of 78.9. (Since the CPWR analysis was written, CPWR's p/e has fallen to 24.6.) Yet you give SYMC only a C+ for valuation, even though when you graded it, it had a p/e of 28 -- exactly the same p/e that Compuware had when it was graded, and its price/sales and price/book ratios were both lower than Compuware's!

Sorry, I don't get it.

Much as I like Morningstar in general, I don't get its other rating systems, either. That is to say, they conflict with each other, as if Morningstar had a really bad case of multiple personality.

Let's take the "business appraisal ratio," for example. Anyone who takes it seriously really ought to sell out right now, and wait for Armageddon in the stock market. Everything is overvalued!

Let's take SYMC. In the financial write-up, Morningstar gives SYMC a peg ratio of 1.0, meaning it is fairly valued. But its appraisal ratio is 0.5 -- which means it is overvalued by about 50%!! Then what about all the other software stocks out there? Look out below!!!!!!

Then, of course, you also have analysts who don't appear to give a hoot or a holler about such old-fashioned fetishes as peg ratios, let alone business appraisals. What is Morningstar's top purchase recommendation for 2000? EMC Corp -- with a p/e of 96, a peg of 3.0, and a business appraisal of 0.2 (80% overvalued)! I know, I know, for top companies one should be prepared to pay a premium, but that is some premium!!!!
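
For what it's worth, here is the arithmetic spelled out in a few lines, on my reading of the ratios (PEG taken as p/e divided by the expected growth rate, and an appraisal ratio of x read as roughly (1 - x) * 100% overvalued); Morningstar may well define them differently.

def implied_growth(pe: float, peg: float) -> float:
    """Expected growth rate (in %) implied by a p/e and a PEG ratio."""
    return pe / peg

def overvaluation(appraisal_ratio: float) -> float:
    """Percentage overvaluation under the (1 - ratio) reading used above."""
    return (1 - appraisal_ratio) * 100

# SYMC: p/e 28, PEG 1.0, appraisal ratio 0.5
print(implied_growth(28, 1.0), overvaluation(0.5))   # 28.0 (% growth), 50.0 (% overvalued)

# EMC: p/e 96, PEG 3.0, appraisal ratio 0.2
print(implied_growth(96, 3.0), overvaluation(0.2))   # 32.0 (% growth), 80.0 (% overvalued)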

Morningstar has a lot of good data -- including your analysis. But what the heck is its philosophy? Darned if I can figure it out.