Discussion:
Install INTEL C-Compiler and rebuild the COMPLETE System
Rembrandt
2003-06-25 21:59:28 UTC
The C-Compiler by Intel is free for personal use (for LINUX).
How could I use this compiler with OpenBSD? I remember an article in the
German PC-Magazin.
The article dealt with installation on FreeBSD, but I don't know more. :o)

Has somebody installed the C-Compiler by Intel?
Maybe somebody could help me?

And: how do I rebuild the COMPLETE system from source?
I have a notebook with a Pentium 4 and I would like to use optimized
binaries (or binarys? I don't know, sorry :o) ). :o)


Sebastian
Eduardo Augusto Alvarenga
2003-06-25 22:03:56 UTC
The C-Compiler by Intel is free for personal use (for LINUX). How
could I use this compiler with OpenBSD? I remember an article in the
German PC-Magazin. The article dealt with installation on FreeBSD,
but I don't know more. :o)
What the heck is this Intel C-Compiler?
Has somebody installed the C-Compiler by Intel? Maybe somebody could
help me?
What type of C code does this package compile? Doesn't gcc 2/3 work for
you?
And: how do I rebuild the COMPLETE system from source? I have a
notebook with a Pentium 4 and I would like to use optimized binaries
(or binarys? I don't know, sorry :o) ). :o)
I suppose you're looking for the -stable branch, right?
Please take a look at:

http://www.openbsd.org/stable.html
http://www.openbsd.org/de/stable.html

The secret is in the Makefile.
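For the archive, the steps stable.html described at the time can be sketched roughly like this (from memory, not a verbatim copy of the official instructions; the release tag, anoncvs mirror, and i386 architecture below are examples, so check the page for your release):

```shell
# Update the source tree to the -stable branch (example tag for 3.3)
cd /usr/src
cvs -d anoncvs@anoncvs.ca.openbsd.org:/cvs up -rOPENBSD_3_3 -Pd

# Build and install a new kernel first
cd /usr/src/sys/arch/i386/conf
config GENERIC
cd ../compile/GENERIC
make clean && make depend && make
cp /bsd /obsd && cp bsd /bsd    # keep the old kernel as a fallback
reboot

# After rebooting onto the new kernel, rebuild userland
rm -rf /usr/obj/*
cd /usr/src
make obj && make build
```

As for Pentium 4 optimization: the gcc 2.95.3 in the base system predates -march=pentium4, so custom CFLAGS in /etc/mk.conf buy little there, and the gain over the stock -O2 binaries is usually small anyway.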

Best Regards,

- --
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Eduardo A. Alvarenga - Analista de Suporte #179653
Centro Estratégico Integrado - SEGUP-PA
Belém, Pará - (91) 259-0555 / 8116-0036
eduardo@{thrx.dyndns.org,cei.ssp.pa.gov.br}
- -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
OpenBSD Consultant: www.openbsd.org/support.html
Dave Feustel
2003-06-25 22:13:17 UTC
Post by Eduardo Augusto Alvarenga
The C-Compiler by Intel is free for personal use (for LINUX). How
could I use this compiler with OpenBSD? I remember an article in the
German PC-Magazin. The article dealt with installation on FreeBSD,
but I don't know more. :o)
What the heck is this Intel C-Compiler?
http://www.intel.com/software/products/compilers/index.htm?iid=ipp_home+software_compiler&
Steve Shockley
2003-06-26 14:28:24 UTC
Post by Eduardo Augusto Alvarenga
What the heck is this Intel C-Compiler?
http://www.intel.com/software/products/compilers/index.htm?iid=ipp_home+software_compiler&

On their site they tout its performance improvements over gcc 3.2. I read a
thread on tech@ that mentioned that 3.2 was really slow, although I'm not
sure if that was just compile speed or the speed of the resulting
executable. Does anyone know if it's even any faster than gcc 2.95?
Henning Brauer
2003-06-26 15:17:30 UTC
Post by Dave Feustel
Post by Eduardo Augusto Alvarenga
What the heck is this Intel C-Compiler?
http://www.intel.com/software/products/compilers/index.htm?iid=ipp_home+software_compiler&
On their site they tout its performance improvements over gcc 3.2. I read a
thread on tech@ that mentioned that 3.2 was really slow, although I'm not
sure if that was just compile speed or the speed of the resulting
executable. Does anyone know if it's even any faster than gcc 2.95?
gcc 2.95.3 code is in most cases slightly slower than gcc 3.x code,
and intel code is even faster.
compile times with gcc 2.95.3 are by magnitudes shorter than with
gcc 3.x and the intel compilers are even slower.
--
http://2suck.net/hhwl.html - http://www.bsws.de/
Unix is very simple, but it takes a genius to understand the simplicity.
(Dennis Ritchie)
Chuck Yerkes
2003-06-30 19:35:05 UTC
Post by Henning Brauer
Post by Dave Feustel
Post by Eduardo Augusto Alvarenga
What the heck is this Intel C-Compiler?
http://www.intel.com/software/products/compilers/index.htm?iid=ipp_home+software_compiler&
On their site they tout its performance improvements over gcc 3.2. I read a
thread on tech@ that mentioned that 3.2 was really slow, although I'm not
sure if that was just compile speed or the speed of the resulting
executable. Does anyone know if it's even any faster than gcc 2.95?
gcc 2.95.3 code is in most cases slightly slower than gcc 3.x code,
and intel code is even faster.
compile times with gcc 2.95.3 are by magnitudes shorter than with
gcc 3.x and the intel compilers are even slower.
Frankly, I don't care how long it takes to compile usually.
I dealt with multi-day gcc bootstraps on Sparc 1's and 2's.
Emacs used to be a several hour build.
That's what other windows are for.

How fast they RUN is often important (though often it's not,
really; the 10/90 and 90/10 rules apply here).

I suspect, however, that an Intel compiler, license notwithstanding,
will not be a good fit for a multiplatform OS.
If it's wonderful and open, then perhaps using it to make gcc
better would be a fine task - for the gcc maintainer group.
Just as EGCS lit a fire under Mr Stallman's ass, perhaps this
will as well.
Peter Valchev
2003-06-30 20:13:00 UTC
Post by Chuck Yerkes
Post by Henning Brauer
gcc 2.95.3 code is in most cases slightly slower than gcc 3.x code,
and intel code is even faster.
compile times with gcc 2.95.3 are by magnitudes shorter than with
gcc 3.x and the intel compilers are even slower.
Frankly, I don't care how long it takes to compile usually.
I dealt with multi-day gcc bootstraps on Sparc 1's and 2's.
Emacs used to be a several hour build.
That's what other windows are for.
Other windows??? Other machines and CPUs too, what's your point?
It takes my VLC vax around 2 weeks to do a make build, I don't want to
imagine how long it will take using gcc3.

Ignoring the other things, this will significantly slow down the
snapshots process, as well as the release process. (Think 2000
ports building on m68k/vax/sparc)
Hugo Villeneuve
2003-06-30 21:19:06 UTC
Post by Peter Valchev
Post by Chuck Yerkes
Post by Henning Brauer
gcc 2.95.3 code is in most cases slightly slower than gcc 3.x code,
and intel code is even faster.
compile times with gcc 2.95.3 are by magnitudes shorter than with
gcc 3.x and the intel compilers are even slower.
Frankly, I don't care how long it takes to compile usually.
I dealt with multi-day gcc bootstraps on Sparc 1's and 2's.
Emacs used to be a several hour build.
That's what other windows are for.
Other windows??? Other machines and CPUs too, what's your point?
It takes my VLC vax around 2 weeks to do a make build, I don't want to
imagine how long it will take using gcc3.
Ignoring the other things, this will significantly slow down the
snapshots process, as well as the release process. (Think 2000
ports building on m68k/vax/sparc)
Will cross-compiling be sought, and could it be trusted enough
that building native packages on each architecture won't be needed?

I thought that was one of the benefits NetBSD got from moving all
their platforms to ELF and GNU binutils.

As for those old platforms, I do hope Theo's compile park has
better hardware than a VAXstation 4000/VLC. A 4000/9[06] would be
a better choice.

<plug>
I kinda wrote a table of the OpenBSD-supported VAXen and their VUP
ratings. The VLC doesn't stand very high in VUP numbers.
http://eintr.net/openbsd/openbsd-supported-vax-vup-perf.txt
</plug>
--
Hugo Villeneuve <***@EINTR.net>
http://EINTR.net/
Peter Valchev
2003-06-30 21:26:08 UTC
Post by Hugo Villeneuve
Will cross-compiling be sought, and could it be trusted enough
that building native packages on each architecture won't be needed?
No, how would you detect, for example, a newly introduced UVM bug that
only shows up when building a certain package? Personally, I also think
of building packages as a huge regression set for the system.
Post by Hugo Villeneuve
I thought that was one of the benefits NetBSD got from moving all
their platforms to ELF and GNU binutils.
No comment... as you read above, I do not consider this a benefit.
Post by Hugo Villeneuve
As for those old platforms, I do hope Theo's compile park has
better hardware than a VAXstation 4000/VLC. A 4000/9[06] would be
a better choice.
I was just giving an example.
Henning Brauer
2003-06-30 22:09:47 UTC
Post by Peter Valchev
Post by Chuck Yerkes
Post by Henning Brauer
gcc 2.95.3 code is in most cases slightly slower than gcc 3.x code,
and intel code is even faster.
compile times with gcc 2.95.3 are by magnitudes shorter than with
gcc 3.x and the intel compilers are even slower.
Frankly, I don't care how long it takes to compile usually.
I dealt with multi-day gcc bootstraps on Sparc 1's and 2's.
Emacs used to be a several hour build.
That's what other windows are for.
Other windows??? Other machines and CPUs too, what's your point?
It takes my VLC vax around 2 weeks to do a make build, I don't want to
imagine how long it will take using gcc3.
Ignoring the other things, this will significantly slow down the
snapshots process, as well as the release process. (Think 2000
ports building on m68k/vax/sparc)
a major point i see is that it slows down development dramatically too
if you have to wait longer and longer and longer for your recompiled
kernel after a two-line change...
--
http://2suck.net/hhwl.html - http://www.bsws.de/
Unix is very simple, but it takes a genius to understand the simplicity.
(Dennis Ritchie)
Chuck Yerkes
2003-06-30 22:24:45 UTC
Post by Henning Brauer
Post by Peter Valchev
Post by Chuck Yerkes
Post by Henning Brauer
gcc 2.95.3 code is in most cases slightly slower than gcc 3.x code,
and intel code is even faster.
compile times with gcc 2.95.3 are by magnitudes shorter than with
gcc 3.x and the intel compilers are even slower.
Frankly, I don't care how long it takes to compile usually.
I dealt with multi-day gcc bootstraps on Sparc 1's and 2's.
Emacs used to be a several hour build.
That's what other windows are for.
Other windows??? Other machines and CPUs too, what's your point?
It takes my VLC vax around 2 weeks to do a make build, I don't want to
imagine how long it will take using gcc3.
And is this vax still running out of geek macho, or is it running
for a practical reason? (E.g., a site I know runs Sun3s with
SunOS 3.5 because the VME cards in them run a production
line and there's really little incentive to change: the OS loads
info onto the card, says "go", and goes to sleep for 6 hrs.)
It would be dandy to run OBSD on them, but still, the OS will be
in use for about 40 minutes/day while the VME cards are controlling
motors and sensors and other machine tasks.

I have less tolerance for complaints that "my 12-year-old (or even
5-year-old) machine takes too long to build." Perhaps this would
be a motivation to get cross-compiling to work - imagine the Sun
4c build being done on a 3GHz Xeon. OTOH, it's rare, outside of
embedded systems, to *really* need cross-compiling.

If you're running BSD on a fully depreciated machine like a VAX
and it's not because you are bound to hardware attached to it, then
you'll likely save the money you spend on electricity and cooling
by getting a new box.
Post by Henning Brauer
Post by Peter Valchev
Ignoring the other things, this will significantly slow down the
snapshots process, as well as the release process. (Think 2000
ports building on m68k/vax/sparc)
I don't, too often. I think of base & comp sets for snapshots and build
ports from source on demand, locally.
Post by Henning Brauer
a major point i see is that it slows down development dramatically too
if you have to wait longer and longer and longer for your recompiled
kernel after a two-line change...
Perhaps more kernel code could go into modules ;)
Change a module, rebuild the module only.


Are we talking an order of magnitude or 3-4%?
The latter is mitigated by Moore's Law.
The former ... ya know, if Mac Classic ][ doesn't get snapshots
as often as platforms that are actually in wide use, then I
can survive. My Sparcs are viable. The Mac 68k's run OBSD
as a novelty.
I begin to wonder what real need there is to support it.
Henning Brauer
2003-06-30 22:43:00 UTC
Post by Chuck Yerkes
Post by Henning Brauer
Post by Peter Valchev
Post by Chuck Yerkes
Post by Henning Brauer
gcc 2.95.3 code is in most cases slightly slower than gcc 3.x code,
and intel code is even faster.
compile times with gcc 2.95.3 are by magnitudes shorter than with
gcc 3.x and the intel compilers are even slower.
Frankly, I don't care how long it takes to compile usually.
I dealt with multi-day gcc bootstraps on Sparc 1's and 2's.
Emacs used to be a several hour build.
That's what other windows are for.
Other windows??? Other machines and CPUs too, what's your point?
It takes my VLC vax around 2 weeks to do a make build, I don't want to
imagine how long it will take using gcc3.
And is this vax still running out of geek-macho or is it running
for a practical reason
I have a Sun IPX running in production because, well, it's up to the
task, perfectly reliable, and I see no reason to change it.
I have a Pentium 60 running in production because, well, it's up to the
task, perfectly reliable, and I see no reason to change it. And I need
the ISA slot in it.
I'm running an old alpha 21066 because, well, see above.
while these aren't the fastest machines of each arch and builds can be
done elsewhere, the same holds true for many other archs.
A compile-time increase of 10% (and we're talking something like
50-100% with gcc3, and far more with intel cc) would already be
majorly annoying on the fastest sparcs. We don't want to drop sparc,
right?
Post by Chuck Yerkes
I have less tolerance for complaints that "my 12-year-old (or even
5-year-old) machine takes too long to build."
honestly, I don't think anyone cares about your tolerance for that.
Post by Chuck Yerkes
Perhaps this would
be a motivation to get cross compiling to work
humbug.
we don't wanna end up with 50% of our supposedly supported arches not
working on the real hardware.
mkr is a good stability test.
ports builds are as well. perhaps they are even a bit better ones.

not to mention that, when doing hardware support hacking, changing to
another machine for each compile is totally unacceptable.
Post by Chuck Yerkes
If you're running BSD on a fully depreciated machine like a VAX
and it's not because you are bound to hardware attached to it, then
you'll likely save the money you spend on electricity and cooling
by getting a new box.
humbug.
old machines aren't necessarily power- and heat sinks.
a 3GHz P4 is by definition.
Post by Chuck Yerkes
Post by Henning Brauer
a major point i see is that it slows down development dramatically too
if you have to wait longer and longer and longer for your recompiled
kernel after a two-line change...
Perhaps more kernel code could go into modules ;)
Change a module, rebuild the module only.
oh yes, sure. sure. think this through.
Post by Chuck Yerkes
Are we talking order of magnitude or 3-4%?
magnitudes.
--
http://2suck.net/hhwl.html - http://www.bsws.de/
Unix is very simple, but it takes a genius to understand the simplicity.
(Dennis Ritchie)
Chuck Yerkes
2003-06-30 23:13:31 UTC
Post by Henning Brauer
Post by Chuck Yerkes
Post by Peter Valchev
Post by Chuck Yerkes
Post by Henning Brauer
gcc 2.95.3 code is in most cases slightly slower than gcc 3.x code,
and intel code is even faster.
compile times with gcc 2.95.3 are by magnitudes shorter than with
gcc 3.x and the intel compilers are even slower.
Frankly, I don't care how long it takes to compile usually.
I dealt with multi-day gcc bootstraps on Sparc 1's and 2's.
Emacs used to be a several hour build.
That's what other windows are for.
Other windows??? Other machines and CPUs too, what's your point?
It takes my VLC vax around 2 weeks to do a make build, I don't want to
imagine how long it will take using gcc3.
And is this vax still running out of geek-macho or is it running
for a practical reason
I have a Sun IPX running in production because, well, it's up to the
task, perfectly reliable, and I see no reason to change it.
Me too. Except the SPARC 10 uses the same amount of power and
makes the same amount of heat.
I build for the Sun4cs on a 166MHz SPARC 10.
It takes less time and I get the binaries I need.
Post by Henning Brauer
I have a Pentium 60 running in production because, well, it's up to the
task, perfectly reliable, and I see no reason to change it. And I need
the ISA slot in it.
Similar. But I dumped my P90 and have a P300 (with ISA, with a 16-port
serial card and an ISA LCD 80x35 "head"). I have a 50MHz 486 around
too. If I can get it >8MB of RAM, it will run usefully. It's been on for
4 hrs, once, in the last 2 years.
I compile for them on an Athlon 2400 box.
It takes an hour or so and I get the binaries I need.
Post by Henning Brauer
I'm running an old alpha 21066 because, well, see above.
Hey! Me too. Actually two DECs. Lots of heat and noise.
An Athlon 900 was faster (than both of them together). So I
turn them on occasionally and build on them. Usually kick off
the build at night and it's done in the AM.
Post by Henning Brauer
while these aren't the fastest machines of each arch and builds can be
done elsewhere, the same holds true for many other archs.
A compile-time increase of 10% (and we're talking something like
50-100% with gcc3, and far more with intel cc) would already be
Wow! 50% is a bad effect.
Post by Henning Brauer
majorly annoying on the fastest sparcs. We don't want to drop sparc,
right?
Well, 4 and 4c look ready to drop by the wayside. But I suppose
there has to be some good reason to keep some of them running.
Post by Henning Brauer
Post by Chuck Yerkes
I have less tolerance for complaints about "my 12 year old (or even
5 year old)" machine takes too long to build.
honestly, I don't think anyone cares about your tolerance for that.
Okay, then I'll up it:
If you're running a 12-year-old machine for anything other than
geek macho or a hardware mandate, then you're both foolish
and cheap. Cheap in that what you spend on power and cooling
in a year for this would get you a more modern machine.
Foolish in that you're spending way more effort propping up
the old machine than you would a newer one.

I'll recall a client of a friend who was still running an SGI 3030
in 1996 because they still had it. With lots of effort, she made
them buy an Indy and replace the 3030 with it. She did this by
showing that they were basically supporting it with 1/3 of a person
("I just spend a few days a month working on it") and had
spent more than the cost of an Indy over just the previous 3 years to
keep it alive. For a new machine, they lowered support costs and
moved the workstation from 12MHz (and about 5 amps) to 300MHz
(and about 2 amps). And the disks went from 2x100MB (loud) + NFS
to 2 local 4GB disks - meaning the network load dropped a bunch,
and their network capacity went up by 20%. When printing full-color
A4s on a network Iris printer, that meant time, which meant
$$ for them.

Spent: $4000, saved: at least $12000.


In SPARC-land, Ultra2's go for a couple hundred dollars/euros. My
IPX (33MHz) with an external disk box was using about $15/month in
power.
My Athlon 900 cost me $250 with a premium Antec power supply and
lots of high quality DDR RAM. A tad more than 17 months of power.
I didn't match the 64MB of RAM of the IPX and I got a 400-watt PS
when I could have used less (but then I stuffed a Radeon into it).

It builds the whole tree in a couple hrs.


The same math, accounting and business case:
As a bonus for your bosses, the numbers suggest that
a 17" LCD display will pay for itself in power and heat savings in
under a year. While LCDs may not be appropriate for people living
in Photoshop and who need very accurate color, it's just fine for
the other 99% of us.
Post by Henning Brauer
Post by Chuck Yerkes
Perhaps this would
be a motivation to get cross compiling to work
humbug.
we don't wanna end up with 50% of our supposedly supported arches not
working on the real hardware.
Nope. But if a 2GHz P4 could build VAXland in 60 minutes and the
tarballs worked, then you'd have both real hardware to test on and
could do rebuilds on an $800 machine. 10 rebuilds, if you were
waiting for them, would save the cost of the machine (figuring
VAXen take 4 hrs more and your time is worth at least $20/hr).
Post by Henning Brauer
mkr is a good stability test.
ports builds are as well. perhaps they are even a bit better ones.
not to mention that, when doing hardware support hacking, changing to
another machine for each compile is totally unacceptable.
well, if the rebuild is 800% faster, what's acceptable may change.
Post by Henning Brauer
Post by Chuck Yerkes
If you're running BSD on a fully depreciated machine like a VAX
and it's not because you are bound to hardware attached to it, then
you'll likely save the money you spend on electricity and cooling
by getting a new box.
humbug.
old machines aren't necessarily power- and heat sinks.
a 3GHz P4 is by definition.
Yeah, as much as my Alpha DECstation. And it's a lot slower
than the 3GHz P4.
Post by Henning Brauer
Post by Chuck Yerkes
Are we talking order of magnitude or 3-4%?
magnitudes.
And there is the answer. It's too slow. With numbers available,
I'll readily concur.
Peter Valchev
2003-06-30 23:08:33 UTC
Post by Chuck Yerkes
And is this vax still running out of geek-macho or is it running
for a practical reason
I am using it, and have discovered & fixed numerous issues in OpenBSD
and in the OpenBSD ports tree by using this machine. Why there? See
below.
Post by Chuck Yerkes
I have less tolerance for complaints that "my 12-year-old (or even
5-year-old) machine takes too long to build." Perhaps this would
Your tolerance is irrelevant here, I am not sure if you realise that.
Post by Chuck Yerkes
be a motivation to get cross compiling to work - imagine the Sun
4c build being done on a 3GHz Xeon. OTOH, it's rare, outside of
embedded systems, to *really* need cross compiling.
cross-compiling is a joke. then you end up with broken stuff that you
ship, just look at NetBSD. we're not going to go that way, even if it
means "supporting" fewer architectures, we will at least support them.
building on a particular architecture is like a regression test, as i
said already
Post by Chuck Yerkes
If you're running BSD on a fully depreciated machine like a VAX
and it's not because you are bound to hardware attached to it, then
you'll likely save the money you spend on electricity and cooling
by getting a new box.
The VLC is 1/4 the size of a sparc pizza box, a lot less weight (when I
got mine, I carried it for 15 blocks under my arm without feeling it)
and it has a VERY small power supply. Saving electricity would actually
be a good reason for using it...
Post by Chuck Yerkes
Post by Henning Brauer
Post by Peter Valchev
Ignoring the other things, this will significantly slow down the
snapshots process, as well as the release process. (Think 2000
ports building on m68k/vax/sparc)
I don't, too often. I think of base & comp sets for snapshots and build
ports from source on demand, locally.
Well, you are irrelevant.

Who makes the CDs you buy? Who makes the stuff that gets updated in
ftp.../snapshots/? Think of those people instead.
Post by Chuck Yerkes
Post by Henning Brauer
a major point i see is that it slows down development dramatically too
if you have to wait longer and longer and longer for your recompiled
kernel after a two-line change...
Perhaps more kernel code could go into modules ;)
Change a module, rebuild the module only.
You sound very clueless
Post by Chuck Yerkes
The former ... ya know, if Mac Classic ][ doesn't get snapshots
as often as platforms that are actually in wide use, then I
can survive. My Sparcs are viable. The Mac 68k's run OBSD
as a novelty.
I begin to wonder what real need there is to support it.
Are you really that selfish? Let me remind you: you are not alone in
this world, think of others who -do- have the need!
Chuck Yerkes
2003-07-01 01:06:38 UTC
Post by Peter Valchev
Post by Chuck Yerkes
And is this vax still running out of geek-macho or is it running
for a practical reason
I am using it, and have discovered & fixed numerous issues in OpenBSD
and in the OpenBSD ports tree by using this machine. Why there? See
below
You are confusing cause and effect. See below (bottom).
Post by Peter Valchev
cross-compiling is a joke. then you end up with broken stuff that you
ship, just look at NetBSD. we're not going to go that way, even if it
means "supporting" fewer architectures, we will at least support them.
building on a particular architecture is like a regression test, as i
said already
So rather than use regression tests, you hope that a port might trigger
problems? Computer scientology.
Post by Peter Valchev
The VLC is 1/4 the size of a sparc pizza box, a lot less weight (when I
got mine, I carried it for 15 blocks under my arm without feeling it)
and it has a VERY small power supply. Saving electricity would actually
be a good reason for using it...
Hmmm, I've been carrying a 12W Soekris to/from work. It fits in my
motorcycle jacket pocket. Does that make me more of a man? No.
(but I still build for it on a 2-CPU 2GHz Intel box). (And no,
you don't see the letters OpenBSD in that paragraph, but that's for
reasons of hardware support.)
It may be faster than your VAX and it lasts something like 4 days on
a small UPS.
Post by Peter Valchev
Well, you are irrelevant.
Yawn.
Post by Peter Valchev
Who makes the CDs you buy? Who makes the stuff that gets updated in
ftp.../snapshots/? Think of those people instead.
Exactly who I *am* thinking of. Of course, if it's a big manual
process to do the builds (not catching and tracking down failures) then
"those people" are simply line workers à la McDonald's.

...
Post by Peter Valchev
Post by Chuck Yerkes
Perhaps more kernel code could go into modules ;)
Change a module, rebuild the module only.
You sound very clueless
Yawn, again. Your nintendo must be broken, little boy.
Post by Peter Valchev
Post by Chuck Yerkes
The former ... ya know, if Mac Classic ][ doesn't get snapshots
as often as platforms that are actually in wide use, then I
can survive. My Sparcs are viable. The Mac 68k's run OBSD
as a novelty.
I begin to wonder what real need there is to support it.
Are you really that selfish? Let me remind you: you are not alone in
this world, think of others who -do- have the need!
And you have yet to describe what need for this there actually is.

Perhaps it's like the west's food imperialism in the 3rd world:
west: We will help you grow wheat.
3w: hurray! we can eat wheat and make bread!
west: No, you must feed the wheat, and lots and lots of water,
to those cows.
3w: ok, we've fed the cow 2000 lbs of wheat and thousands and
thousands of gallons of water, now what?
west: Now you may have the 400 pounds of beef!

The site that's running a business-critical system on hardware
that requires a VME machine, or hardware that only works with a
VAX or NuBus? It makes tons of sense to run older hardware.
There is a TON of scientific measurement stuff from National
that runs really well in a Mac IIci and would cost a fortune
to reproduce for PCI. That makes sense.

The only other scenario that jumps to mind are the hobbyists,
those who want to learn and have a scrap machine that's around
to play with BSD on.

But business reasons? While an IPX may run a home firewall fine,
at 12 years old, I *expect* the RAM and the power supply to die.
Each day it doesn't is a little mitzvah. If you're running
a business on this kind of hardware to save money, then you're
not likely going to be successful at that point. I've worked for
folks who've spent THOUSANDS in less obvious costs to save hundreds
of dollars. And they do it everywhere: my manager who insisted
I take the subway to a client for an hour each way rather than
spend $30 and 30 minutes in a taxi, when we were charging
$100/hr for the time on site (that extra hour of non-billable
time paid for a cab 3 times).

As I said, being cheap costs more.

Doing it for geek macho is fine. I have 2 working Apple ][s,
I have a working Kaypro "portable" CP/M machine. I have Alphas,
and SPARCstation 1s. But I don't really run them much.

I won't cry that a total build on the SPARC 1 takes 3+ days.
I won't cry if, suddenly, the BSD's stop working on it. It
became junk when Solaris 2.5 came out - production quality
and unusable on a Sparc 1.

Frankly, there are more productive uses of time than keeping an
irrelevant machine going (and see the description of irrelevant a
couple paragraphs up). The only thing I've seen you offer is that
you've found bugs in OpenBSD by supporting old machines. Well,
one could observe that you found the bugs because you were going
through code, and that you were only going through code because of
your old machines. The latter mandates the former, but the opposite
is not as true. I'd offer that Todd and Theo and others have found
plenty of bugs by just going through code.


We're clearly off topic.
Intel's compiler isn't going to find its way into any multiplatform
OS, even if it COULD work outside of Linux.
GCC 3 isn't going to be part of OpenBSD until ... well, until
it is part of OpenBSD.

Have a nice day.
Anil Madhavapeddy
2003-07-01 01:35:03 UTC
Post by Chuck Yerkes
So rather than use regression tests, you hope that a port might trigger
problems? Computer scientology.
You're wrong. Nothing stresses a system like a good old ports bulk
build. Especially with a bit of parallel building thrown in.

I'm sure you could build an equivalent regression suite, but we simply
don't have one. Want to help?
--
Anil Madhavapeddy http://anil.recoil.org
University of Cambridge http://www.cl.cam.ac.uk
Damien Miller
2003-07-01 02:04:14 UTC
Post by Chuck Yerkes
So rather than use regression tests, you hope that a port might trigger
problems? Computer scientology.
I eagerly await your set of regress tests that have the same coverage as a
ports build.

-d
Dave Feustel
2003-06-30 21:00:23 UTC
Given that ICC is not open source, there is a new article at
Ace's Hardware (http://www.aceshardware.com/) that shows
some truly impressive results for icc 7.1 using SSE2 instructions
on a P4.

Dave Feustel
Marcus Watts
2003-06-30 21:19:14 UTC
Post by Dave Feustel
Given that ICC is not open source, there is a new article at
Ace's Hardware (http://www.aceshardware.com/) that shows
some truly impressive results for icc 7.1 using SSE2 instructions
on a P4.
They go on to point out that the reason might be due to Intel releasing
insufficient information on P4 optimization and AMD releasing much
better information.

One might get the idea that Intel has gotten confused as to whether
it's using the chip to sell the compiler, or using the compiler to sell
chips.

However, the real answer seems clear: if you care about performance and
run opensource software, you want to avoid proprietary architectures
and vendors who don't want to fully describe their product.

-Marcus Watts
Chris Palmer
2003-06-30 21:48:33 UTC
Post by Marcus Watts
However, the real answer seems clear: if you care about performance
and run opensource software, you want to avoid proprietary
architectures and vendors who don't want to fully describe their
product.
What vendors are considered good these days? AMD and...? Anyone else? I
once bought a BusLogic/Mylex SCSI card specifically because they gave
specs freely, and a Matrox video card for the same reason. I haven't had
reason to buy much more hardware for non-work reasons since then (1997),
so I'm kinda out of touch.
Jesper Louis Andersen
2003-07-01 11:02:54 UTC
Post by Dave Feustel
Given that ICC is not open source, there is a new article at
Ace's Hardware (http://www.aceshardware.com/) that shows
some truly impressive results for icc 7.1 using SSE2 instructions
on a P4.
Unfortunately, that does not matter at all to 99% of all applications
written and neither to the kernel. If you have your MP3-player or your
FFT-decoder or your video decoder you might reap benefit from using a
compiler that can do optimized SSE2-code.

The real problem here is the way optimizations get added to the
compiler. Some optimizations run at tremendous speed and will produce
executables with the same performance as the expensive optimizations,
but at a fraction of the compile time.

I cannot see how icc applies to OpenBSD in any way, beyond the cases
mentioned above where SSE2 would give you a benefit. Second, gcc 3.2.2
seems rather buggy IMO, not producing correct code when optimizations
are turned on. Furthermore, there is no reason why you should add
''unsafe[1]'' optimizations to the -O flags.

What really confuses me is that gcc 3.x got an SSA-form converter.
Given that form, it should be rather easy to do a lot of optimizations
rather fast (faster than use-def chains, and a lot simpler). So in
principle I think it should be faster than it is.

Next point: gcc 3.x got a compile-time regression test, so they might
actually speed the compiler up again. If at some point it catches up
with 2.95.x on all archs, there might be a possibility of adopting it.

Another compiler is TenDRA, but it lacks backends for a couple of
archs as well as some of the fancier optimizations; but see above, we
might not want those if they won't give you any bang for the buck.

Lastly: if you want pure speed you should not even be looking at C. C
is rather hard to optimize fully because of pointer aliasing: you need
to employ an alias analysis in order to be able to drive certain
optimizations through. A language like OCaml or SML does not have this
problem to the same extent. Yet OCaml and SML are generally hard to
use for writing kernels, partly due to the garbage collection these
languages employ.
--
j.
Hannah Schroeter
2003-07-10 12:46:23 UTC
Permalink
Hello!
Post by Chuck Yerkes
[...]
Frankly, I don't care how long it takes to compile usually.
I dealt with multi-day gcc bootstraps on Sparc 1's and 2's.
Emacs used to be a several hour build.
That's what other windows are for.
You don't seem to be a software developer. For me, turnaround
times are very important, and increasing the compile times by a
factor of more than 2 definitely makes me less productive (and that
factor is from a comparison of gcc 2.95 vs gcc 3.2, icc has compile
times similar to gcc 3.2). Speed of the resulting binary is less
important in the development cycle.
Post by Chuck Yerkes
How fast they RUN is, often, important (of often its not,
really. the 10/90 & 90/10 rules apply here).
[...]
Code speed may be important, but less than you'd expect. How many
people use scripting languages with rather slow interpreters?
How many people use a compiled language, but inefficiently?

Of course, best would be *both*: Fast turnarounds for development,
and then, by changing just one compiler switch, fast code, perhaps
with higher compilation times.

Kind regards,

Hannah.

John Eisenschmidt
2003-06-25 22:25:27 UTC
Permalink
If I had to guess, the OP is interested in the heavy optimization the
Intel C compiler does for the X86 architecture. They also sell a
complete replacement for Microsoft VC++ ... keep using their IDE (and
source safe if you want), but the compiler is much tighter.

The Linux version is free. My guess is that it will break quite a bit.

Rumor has it that Eduardo Augusto Alvarenga wrote:
Post by Eduardo Augusto Alvarenga
The C-Compiler by Intel is free for personal use (for LINUX). How
could I use this compiler with OpenBSD? I remembered an article in an
german PC-Magazin. The article deal with the installation on FreeBSD
but I don't know more. :o)
What the heck is this Intel C-Compiler?
Has somebody installed the C-Compiler by Intel? May be somebody could
help me?
What type o C code does this package compiles? Doesn't gcc2/3 works for
you?
And: How to rebuild the COMPLETE Systeme from the source? I've an
Notebook with an Pentium4 and I would use optimized binarys (or
binaries? Don't know, I'm sorry :o) ). :o)
I suppose you're looking for the -stable branch right?
http://www.openbsd.org/stable.html
http://www.openbsd.org/de/stable.html
The secret is in the Makefile.
Best Regards,
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Eduardo A. Alvarenga - Analista de Suporte #179653
Centro Estratégico Integrado - SEGUP-PA
Belém, Pará - (91) 259-0555 / 8116-0036
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
OpenBSD Consultant: www.openbsd.org/support.html
--
John W. Eisenschmidt (***@eisenschmidt.org)
http://www.eisenschmidt.org/jweisen/pgp.html

"I liked HP, and at one time I liked Compaq, but I liked DEC better
than HP and Compaq put together." -Phil Mendlesohn (comp.os.vms)

Henning Brauer
2003-06-25 22:49:27 UTC
Permalink
Post by John Eisenschmidt
If I had to guess, the OP is interested in the heavy optimization the
Intel C compiler does for the X86 architecture. They also sell a
complete replacement for Microsoft VC++ ... keep using their IDE (and
source safe if you want), but the compiler is much tighter.
it's heavily optimizing and thus gaining a few % performance.
of course also breaking a larger % of application and system parts.
very questionable gain.
Post by John Eisenschmidt
The Linux version is free.
It's anything but free.
It's commercial closed-source shit.
But I know what you meant: "you don't have to pay for it as long as
you use it for your personal use AND restriction blah AND another
restriction blah and and and and and and and and and..."
(hmm, you might need the saved money for your lawyer to explain the
license to you)
--
http://2suck.net/hhwl.html - http://www.bsws.de/
Unix is very simple, but it takes a genius to understand the simplicity.
(Dennis Ritchie)
Rembrandt
2003-06-26 00:06:18 UTC
Permalink
Post by Henning Brauer
Post by John Eisenschmidt
If I had to guess, the OP is interested in the heavy optimization the
Intel C compiler does for the X86 architecture. They also sell a
complete replacement for Microsoft VC++ ... keep using their IDE (and
source safe if you want), but the compiler is much tighter.
it's heavily optimizing and thus gaining a few % performance.
of course also breaking a larger % of application and system parts.
very questionable gain.
For a notebook, every percent is important. :o)
Post by Henning Brauer
Post by John Eisenschmidt
The Linux version is free.
It's anything but free.
It's commercial closed-source shit.
But I know what you meant: "you don't have to pay for it as long as
you use it for your personal use AND restriction blah AND another
restriction blah and and and and and and and and and..."
(hmm, you might need the saved money for your lawyer to explain the
license to you)
Ok... any idea how to replace gcc with the Intel C compiler? :o)

Sebastian
Peter Hessler
2003-06-25 23:56:47 UTC
Permalink
On Thu, Jun 26, 2003 at 02:06:18AM +0200, Rembrandt wrote:
:> On Wed, Jun 25, 2003 at 06:25:27PM -0400, John Eisenschmidt wrote:
:> > If I had to guess, the OP is interested in the heavy optimization the
:> > Intel C compiler does for the X86 architecture. They also sell a
:> > complete replacement for Microsoft VC++ ... keep using their IDE (and
:> > source safe if you want), but the compiler is much tighter.
:>
:> it's heavily optimizing and thus gaining a few % performance.
:> of course also breaking a larger % of application and system parts.
:> very questionable gain.
:
:For a notebook, every percent is important. :o)
:
Not true.

:> > The Linux version is free.
:>
:> It's anything but free.
:> It's commercial closed-source shit.
:> But I know what you meant: "you don't have to pay for it as long as
:> you use it for your personal use AND restriction blah AND another
:> restriction blah and and and and and and and and and..."
:> (hmm, you might need the saved money for your lawyer to explain the
:> license to you)
:
:Ok... any idea how to replace gcc with the Intel C compiler? :o)
:
:Sebastian
:
You can't in OpenBSD. And if you did, you would be unsupported. And
if you could support yourself, you wouldn't have to ask. Sorry.
Ben Goren
2003-06-26 00:30:42 UTC
Permalink
Post by Rembrandt
it's heavily optimizing and thus gaining a few % performance.
of course also breaking a larger % of application and system
parts. very questionable gain.
For a notebook, every percent is important. :o)
Let's try a thought experiment.

Let's say you could boost your laptop's performance
by...say...five percent. In reality, you won't get this much, but
we'll be generous. Let's also say that you work on this laptop
eight hours a day, five days a week, fifty weeks a year. Let's
also say that you're that impossible person who works faster than
the computer and that you're always--even when typing--waiting on
the computer. And that you never stop working, even for a
microsecond. Over the course of the year, that extra five percent
will save you a hundred hours. In reality, you'll probably
only spend a very small portion of your day waiting on the
computer. We'll be extra generous and say that you spend ten
percent of your time doing things where you're waiting on the
computer. Your hundred saved hours per year is now ten.

Now, wanna guess how many hours it'd take Theo to do something
like this? More than ten. Waaaaaay more than ten. More than a
hundred. Waaaaay more than a hundred. It'd also take a lot of
other things that ain't gonna happen, but that's beside the point.

So, if the person who knows OpenBSD better than anybody else on
the planet would have to spend significantly more time doing this
insane thing than could possibly be saved by doing it...what on Earth
makes you think it's worth doing?

Oh--all that time? That's just for a first pass at it. It'd take
years before the code would stabilize. The first time you hit a
bug that takes you a day to figure out, you've wiped out all your
speed advantage.

Another thought experiment: instead of spending that hundred hours
wasting time porting the operating system to a new compiler, get
an entry-level tech support job for $10/hour. After a hundred
hours of work, you've got enough to buy yourself a pretty nifty
new laptop--one that'll be twice as fast as my luxuriously-fast
desktop.
Post by Rembrandt
Ok... any idea how to replace gcc with the Intel C compiler? :o)
Yes. Put up a...say...$10,000,000.00 bond for the first person who
coughs up the code, no strings attached, no long-term maintenance
commitment. I bet Theo'd do it then...but I don't think I'd bet
more than lunch. Okay, dinner.

Or, on a more realistic note, convince Intel to release their
compiler under the BSD license. That'd really make Theo sit
up and take notice. A tightly-optimizing compiler without GPL
(or other) encumbrances? One can dream....

Cheers,

b&

--
Ben Goren
mailto:***@trumpetpower.com
http://www.trumpetpower.com/
icbm:33o25'37"N_111o57'32"W

Monty Brandenberg
2003-06-26 01:10:15 UTC
Permalink
Post by Ben Goren
Post by Rembrandt
For a notebook, every percent is important. :o)
Let's try a thought experiment.
Let's say you could boost your laptop's performance
by...say...five percent.
Another comparison... Wait six weeks and Moore's Law will give
you an extra 5% just for waiting.

--
Monty Brandenberg
jared r r spiegel
2003-06-26 07:54:33 UTC
Permalink
Post by Ben Goren
Post by Rembrandt
it's heavily optimizing and thus gaining a few % performance.
of course also breaking a larger % of application and system
parts. very questionable gain.
For a notebook, every percent is important. :o)
Let's say you could boost your laptop's performance
by...say...five percent.
don't get me wrong, i'm all about trying to get extra performance
where it's needed -- several ! "todayfast" PCs under my roof -- but
didn't you mention this laptop is a p4/somethin'GHz.. ??

i've run an /etc/mk.conf with
CFLAGS+=-O3 -march=i586 -mcpu=k5

and another with
CFLAGS+=-O3 -march=i686

and ran systems 'make build'ed with those without gaping weirdness..

a few goofy things happened ( namely freebsd-opera FIN flooding
my gateway ( the 686 ) at random times and whacking the load on
the gateway to ~24 )....

one of my first thoughts as possible causes were those cflags, so
i'm back to just specifying the -march now.

and there's no way in the universe i'd post up about unexpected
problems/issues on those machines with that configuration...

one is probably better off trying to optimize a single port,
when needed ( eg bochs ), for the performance gain, rather than
the entire underlying system.

and if i really didn't see a whopping load of "brand new breathing
room" on a k6-2/500 with optimized whatnot, i doubt there's
much of anything to be seen on a _pentium four_, laptop or not.

sebastian, perhaps jack up your BUFCACHEPERCENT ( config(8), options(4) ),
use an mfs /tmp, and
let the rest lie. it's unlikely to really pay dividends for you
to sweat out the last instruction or two with higher optimization...?
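for what jared suggests, something along these lines (sizes and
percentages are illustrative assumptions, not tuned values):

```
# in the kernel configuration file, before running config(8) and
# rebuilding -- give the buffer cache a larger share of RAM:
option BUFCACHEPERCENT=30

# in /etc/fstab -- a memory-backed /tmp via mfs(8); -s is the size
# in 512-byte blocks (131072 ~= 64 MB):
swap /tmp mfs rw,nosuid,-s=131072 0 0
```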

ben's analogy about the 'saved hours per year' reminds me of something
i go over with people at least once every workday:

in w9x dialup land, people _love_ to put that little checkmark in
the dialup connection dialogue labelled 'connect automatically'.
most people, it seems, want to get the most out of their internet
lifestyle; some feel that that checkbox will assist them.

A) DUN in 9x/ME *breaks*, just like TCP/IP *breaks*; frequently
through no fault of the user. if it weren't for the popular
MS family of OSes, i submit that the entire idea of something
on the PC just *poof*: "getting corrupted", would be foreign
to the masses, but on the contrary, the concept that "computers
just get corrupted" is frequently gobbled up like yummy stew.
putting 'connect automatically' on, in my
experience, shortens the lifespan of DUN // makes it "break"
sooner.

B) *they are saving _ONE_ mouse click* (per connect). oh enn ee.
1. one does not even need to have read anything by james gleick
to be able to do the math and see that that does *not* add up
to a free day at the end of the year...

C) see last sentence of A

jared
Miod Vallat
2003-06-26 08:52:25 UTC
Permalink
Post by jared r r spiegel
don't get me wrong, i'm all about trying to get extra performance
where it's needed -- several ! "todayfast" PCs under my roof -- but
didn't you mention this laptop is a p4/somethin'GHz.. ??
i've run an /etc/mk.conf with
CFLAGS+=-O3 -march=i586 -mcpu=k5
and another with
CFLAGS+=-O3 -march=i686
and ran systems 'make build'ed with those without gaping weirdness..
If it works for you, fine. But did you know that your kernel, even with
such a mk.conf, is not compiled that way, because the kernel makefile
contains workarounds to nullify these options?

The reason behind this is that the OpenBSD kernel is a known piece of
code that gets miscompiled by gcc 2.95 with aggressive optimisation
flags like yours.

Since the kernel is a much smaller amount of code than the rest of the
system and all your third-party applications, would you keep trusting
those flags for your whole system?

Miod
Jesper Louis Andersen
2003-06-26 09:56:07 UTC
Permalink
Post by Miod Vallat
Post by jared r r spiegel
don't get me wrong, i'm all about trying to get extra performance
where it's needed -- several ! "todayfast" PCs under my roof -- but
didn't you mention this laptop is a p4/somethin'GHz.. ??
i've run an /etc/mk.conf with
CFLAGS+=-O3 -march=i586 -mcpu=k5
and another with
CFLAGS+=-O3 -march=i686
and ran systems 'make build'ed with those without gaping weirdness..
If it works for you, fine. But did you know that your kernel, even with
such a mk.conf, is not compiled that way, because the kernel makefile
contains workarounds to nullify these options?
[...]

Added note: -O3 turns on rather aggressive optimizations. Some of
these could actually be degrading your performance, since GCC does not
solely use ''safe'' optimizations when this flag is given.

One problem, which too many compilers have, is that code compiled
with optimization is altered so that its semantics change. That was
never the intention of optimization: the program should be
transformed, but into an equivalent one (semantics-wise).

It would be nice to have TenDRA within the 5% bound of GCC, but past
that point the added complexity will almost surely yield almost no
performance gain and a considerable increase in compilation time.
Selective optimization choices would really reap a benefit here.
--
j.
jared r r spiegel
2003-06-26 15:13:22 UTC
Permalink
Post by Miod Vallat
Post by jared r r spiegel
i've run an /etc/mk.conf with
CFLAGS+=-O3 -march=i586 -mcpu=k5
and another with
CFLAGS+=-O3 -march=i686
and ran systems 'make build'ed with those without gaping weirdness..
If it works for you, fine. But did you know that your kernel, even with
such a mk.conf, is not compiled that way, because the kernel makefile
contains workarounds to nullify these options?
yeah. i forgot to mention about the Makefile i'd edit after running the
config KERNELNAME. to change the -march=i486 and the -O2.

i was captain typo last night, as it should've been '-mcpu=k6'
Post by Miod Vallat
Since the kernel is a much smaller amount of code than the rest of the
system and all your third-party applications, would you keep trusting
those flags for your whole system?
like i said; at first sign of weirdness, i redid things with no such
aggressive flags.

that's what i was trying to say with it: yes, i did do kernel/userland
cflags optimizations; yes, it all did seem to run sanely; yes, something
weird has happened; yes, the first thing i did was tone down the
-O? whatnot.

jared
Chuck Yerkes
2003-06-26 01:00:14 UTC
Permalink
Post by Rembrandt
Post by Henning Brauer
Post by John Eisenschmidt
If I had to guess, the OP is interested in the heavy optimization the
Intel C compiler does for the X86 architecture. They also sell a
complete replacement for Microsoft VC++ ... keep using their IDE (and
source safe if you want), but the compiler is much tighter.
it's heavily optimizing and thus gaining a few % performance.
of course also breaking a larger % of application and system parts.
very questionable gain.
For a notebook, every percent is important. :o)
So you're using a SCSI PCMCIA card to attach it to your 15K RPM
SCSI disks - cause MY laptop spends a fair amount of time in IO
waiting for that 4200 RPM laptop disk.

Or you could wait 3 months and get several %% improvements.
Or a year and processors will go up by 50% and overall performance
by 10-15%. (RAM isn't getting faster, disk IO isn't. CPU is).

...
Post by Rembrandt
Ok... any idea how to replace gcc with the Intel C compiler? :o)
Sure, have at it. Let us know. Let us know when you've gotten
Intel to let it go under a BSD license ("do with it what you will
just don't claim it as yours").

Me? I'd rather port the BSD userland to a linux kernel to get
Stallman to shut the f*** up about GNU/Linux. Better waste of time
with a result that benefits more people.
Philipp Buehler
2003-06-27 06:01:50 UTC
Permalink
Post by Chuck Yerkes
Post by Rembrandt
For a notebook, every percent is important. :o)
So you're using a SCSI PCMCIA card to attach it to your 15K RPM
SCSI disks - cause MY laptop spends a fair amount of time in IO
waiting for that 4200 RPM laptop disk.
*crawls back from the floor*

Yeah, right!

Chuck: perfect selfLART

ciao
(say cardbus, at least..)
--
pb - Philipp Buehler
http://fips.de/support-me.html
STeve Andre'
2003-06-26 00:14:20 UTC
Permalink
Sebastian, if you want to try using another compiler with
OpenBSD, I would suggest trying out the TenDRA C Compiler,
available at www.tendra.org.

It would be far more useful to everyone if some playing around
with that produced any results. The Intel compiler is free only on
the surface of things. Remember that and avoid it.

--STeve Andre'
Post by Rembrandt
The C-Compiler by Intel is free for personal use (for LINUX).
How could I use this compiler with OpenBSD? I remembered an article in an
german PC-Magazin.
The article deal with the installation on FreeBSD but I don't know more. :o)
Has somebody installed the C-Compiler by Intel?
May be somebody could help me?
And: How to rebuild the COMPLETE Systeme from the source?
I've an Notebook with an Pentium4 and I would use optimized binarys (or
binaries? Don't know, I'm sorry :o) ). :o)
Sebastian
Brad
2003-06-26 00:45:41 UTC
Permalink
Besides all the other issues that have been brought up so far, the
bigger overriding technical problem is that this compiler targets
Linux/i386 and would output Linux/i386 ELF binaries, not OpenBSD/i386
ELF binaries, so what good is it?

// Brad
Post by STeve Andre'
Sebastian, if you want to try using another compiler with
OpenBSD, I would suggest trying out the TenDRA C Compiler,
available at www.tendra.org.
It would be far more useful to everyone if some playing around
with that produced any results. The Intel compiler is free only on
the surface of things. Remember that and avoid it.
--STeve Andre'
Post by Rembrandt
The C-Compiler by Intel is free for personal use (for LINUX).
How could I use this compiler with OpenBSD? I remembered an article in an
german PC-Magazin.
The article deal with the installation on FreeBSD but I don't know more. :o)
Has somebody installed the C-Compiler by Intel?
May be somebody could help me?
And: How to rebuild the COMPLETE Systeme from the source?
I've an Notebook with an Pentium4 and I would use optimized binarys (or
binaries? Don't know, I'm sorry :o) ). :o)
Sebastian
Christian Weisgerber
2003-06-26 11:06:09 UTC
Permalink
Post by Brad
Besides pointing out all the other crap that has been brought up so far,
the bigger overriding technical issue here is this compiler is for Linux/i386
and would be outputting Linux/i386 ELF binaries and not OpenBSD/i386 ELF
binaries so what good is it?
Actually, FreeBSD has a port of icc that they rigged to produce
native FreeBSD/i386 executables, just like their ccc port produces
native FreeBSD/alpha ones.
--
Christian "naddy" Weisgerber ***@mips.inka.de