
Cluster Stuff




To Sean: Welcome!  We're taking any Pentium-class computers running 
Linux.  If you're still interested, let me know.

I have a working cluster (with all of two nodes) running PVMPOV.  Here's 
an example of the benchmarks I've been getting: (both are P133s)

POVRAY:
Time For Parse:    0 hours  0 minutes   2.0 seconds (2 seconds)
Time For Trace:    0 hours  8 minutes  43.0 seconds (523 seconds)
    Total Time:    0 hours  8 minutes  45.0 seconds (525 seconds)
519.87user 1.26system 8:47.14elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (452major+550minor)pagefaults 0swaps

PVMPOV (2 nodes):
Time For Trace:    0 hours  4 minutes  36.0 seconds (276 seconds)
    Total Time:    0 hours  4 minutes  36.0 seconds (276 seconds)
2.10user 1.63system 4:41.66elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (408major+789minor)pagefaults 41swaps

(These stats, by the way, use the official POVRAY benchmark, so you can 
go to www.povray.org, click Resources, and go to the Benchmark page to 
compare how we're doing.)

There are only a few requirements to entering the cluster:

 - You must be able to reconfigure your Ethernet interface into the
192.168.50.[40+] IP address range, either with IP aliasing or by just
changing your address with ifconfig.  If you don't know how to do this,
I can do it for you at the show.  If you don't have an Ethernet card,
talk to Steve immediately!
 - You must be able to NFS-mount two directories from my machine: 
/usr/share/pvm3 and /usr/lib/povray3.  You must mount them at exactly the
same paths on your system.  If you have a recent (3.0+) version of POVRAY
installed, you don't need to mount the latter, but EVERYONE will need to
mount the former! 
 - You must have a user account on your machine with access to the two 
directories above that will allow the user "jeff" on my server 
(server1, 192.168.50.3) to rsh in.  The easiest way to do this is to 
create a user "jeff" on your system, create an .rhosts file for it, and 
enable rshd in your inetd.conf.  If you don't know how to do this, I can 
set it up for you at the show.  Afterward, you can delete the account if 
you like.  I've actually had more problems running the software as root, 
so I prefer not to use it.
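For anyone who wants to prepare ahead of time, here's a rough sketch (run
as root) of the three requirements above.  The interface name eth0, the
example address 192.168.50.41, and the useradd flags are my assumptions;
pick a free address in the [40+] range and adjust for your own distro:

```shell
# 1. Put your NIC on the cluster subnet.  The eth0:0 alias form adds
#    the address alongside your existing one; plain "ifconfig eth0 ..."
#    would replace it instead.
ifconfig eth0:0 192.168.50.41 netmask 255.255.255.0 up

# 2. NFS-mount the two shared trees from server1 at the SAME paths.
mkdir -p /usr/share/pvm3 /usr/lib/povray3
mount -t nfs 192.168.50.3:/usr/share/pvm3  /usr/share/pvm3
mount -t nfs 192.168.50.3:/usr/lib/povray3 /usr/lib/povray3

# 3. Create the "jeff" account and let jeff@server1 rsh in without a
#    password (.rhosts format is "hostname username", one per line).
useradd -m jeff
echo "server1 jeff" > ~jeff/.rhosts
chown jeff ~jeff/.rhosts
chmod 600 ~jeff/.rhosts
# Then make sure the "shell" (in.rshd) line in /etc/inetd.conf is
# uncommented, and tell inetd to reread its config:
killall -HUP inetd
```

You can sanity-check the last step from server1 with something like
"rsh yourhost -l jeff uname -a" before the show.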

Currently, we're running glibc-based binaries, but I intend to get a 
libc5 system working as well.  When that happens, I will let libc5 people 
know if there are any other requirements.  Obviously, at this point, 
glibc is preferred; however, don't upgrade unless you were planning to 
before now.

The earlier you get to the show, the better.  I will be there between 
6:30 and 7 am to start setting up.  If you must be later, we can add you 
to the cluster on the fly, but you won't be able to contribute to the 
early demos.

The results of the official benchmark make me think we should pick a time
to hype up an attempt at a good score for the official page.  Depending on
the nodes contributed, we stand a good chance of breaking into the top
five best-performing entries.  Obviously, we'd want all contributors'
nodes up and running for the attempt, so I was thinking early afternoon.
Let me know what you think.

Sorry for the long-winded post.  Let me know if you have any feedback, or 
if you can contribute a node (if you haven't already).


--
To unsubscribe, send email to majordomo@luci.org with
"unsubscribe luci-discuss" in the body.