Since our request/reply is simple, maybe it is not that HEAVY.

Chris McKillop <email@example.com> wrote:
the case of multiple machines with the same named resource. Perhaps using
an open standard like LDAP or some other open scheme.
I hadn't thought of LDAP. That's an interesting idea, but it's also pretty
heavyweight as solutions go, especially when working with small embedded systems.
For the larger systems that might have LDAP around for other reasons, it's a perfect fit.
I am actually investigating this one. So the "service lookup" actually
sends out an LDAP message, and we will have a tiny LDAP server to answer
just those requests.
Exactly. A "service" could simply be a symlink to the remote node.

Another thing I found light-years ahead of QNX4 was the push to use resmgrs
so that all applications can use the POSIX APIs to communicate to the
managers over the network.
Using a global name is pretty much useless if you want to use Perl to talk
to your service.
Well, this depends. If you use name_attach() and name_locate(), then you
could have a resmgr manage the /var/net space and perform a redirect/symlink
type service to point at the actual resource in /net/machine/...; then you COULD.
Mind you, there isn't any reason why you couldn't have a resmgr sit "on top" of
/var/net and examine those requests to determine whether to let them pass through
or be redirected, somewhat like fs-pkg does already.
Being able to find a "service" without knowing its server node is very important.
Think: you take an iPaq into a company, just ask "where is the tcpip service?",
run "SOCK=<tcpip service> voyager", and off you go! You don't actually care who
IS providing tcpip.
This has got to be a set of APIs and daemons. Like a system that has daemons

There is a related topic (cdm: you knew this was coming), which is remote spawn,
which is, IMO, the other requirement for being able to actually create distributed systems.
Right now remote spawn *IS* possible, but it needs to be wrapped up in an API call
of some sort to move it out of the realm of "guru black magic".
that exist in each node of a distributed network, keep track of each other's
CPU usage, and "spawn" processes onto the spare nodes. Better yet, with
support from the HAT, we could sort of do a remote "fork()" to push a
job from busy nodes to spare nodes.
You're running Mozilla, and every time you click a link, the actual process
flies from one node to another.
PVM is actually something like that, except it can only decide
on which node to start a job.