Debbie Kane

Article:Talking to hardware under QNX Neutrino

Post by Debbie Kane » Wed Nov 22, 2000 2:11 pm

Talking to hardware under QNX Neutrino
By Dave Donohoe, QNX Software Systems Ltd.
http://support.qnx.com/support/articles ... dware.html

If you've ever tried to develop a device driver under a traditional Unix
operating system, you're sure to feel spoiled when developing hardware-level
code for QNX Neutrino.

Thanks to Neutrino's microkernel architecture, writing device drivers is
like writing any other program. Only core OS services reside in "kernel"
address space -- everything else, including device drivers, resides in
"process" or "user" address space. The result is that a device driver has
access to all the services that are available to "regular" applications.

Many models are available to driver developers under Neutrino. Generally,
the type of driver you're writing will determine the driver model you'll
follow. For example, graphics drivers follow one particular model, which
allows them to plug into the Photon graphics subsystem, whereas network
drivers follow a different model, and so on.

On the other hand, depending on the type of device you're targeting, it may
not make sense to follow any existing driver model at all.

In this article, we'll focus on the low-level details of accessing and
controlling device-level hardware, which are common to all types of device
drivers.

Probing the hardware

If you're targeting a "closed" embedded system with a fixed set of hardware,
your driver may be able to assume that the hardware it's going to control is
present in the system and is configured in a certain way.

But if you're targeting more generic systems, such as the desktop PC, you
want to first determine whether the device is present. Then you need to
figure out how the device is configured (e.g. what memory ranges and
interrupt level belong to the device).

For some devices, there's a standard mechanism for determining
configuration. Devices that interface to the PCI bus have such a mechanism.
Each PCI device has a unique "vendor" and "device" ID assigned to it.

The following piece of code demonstrates how, for a given PCI device, to
determine whether the device is present in the system and what resources
have been assigned to it:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <hw/pci.h>

int main(void)
{
    struct pci_dev_info info;
    void *hdl;
    int i;

    memset(&info, 0, sizeof (info));

    if (pci_attach(0) < 0) {
        perror("pci_attach");
        exit(EXIT_FAILURE);
    }

    /*
     * Fill in the Vendor and Device ID for a 3dfx VooDoo3
     * graphics adapter.
     */
    info.VendorId = 0x121a;
    info.DeviceId = 5;

    if ((hdl = pci_attach_device(0,
        PCI_SHARE|PCI_INIT_ALL, 0, &info)) == 0) {
        perror("pci_attach_device");
        exit(EXIT_FAILURE);
    }

    for (i = 0; i < 6; i++) {
        if (info.BaseAddressSize[i] > 0)
            printf("Aperture %d: "
                "Base 0x%llx Length %d bytes Type %s\n", i,
                PCI_IS_MEM(info.CpuBaseAddress[i]) ?
                PCI_MEM_ADDR(info.CpuBaseAddress[i]) :
                PCI_IO_ADDR(info.CpuBaseAddress[i]),
                info.BaseAddressSize[i],
                PCI_IS_MEM(info.CpuBaseAddress[i]) ? "MEM" : "IO");
    }

    printf("IRQ 0x%x\n", info.Irq);

    pci_detach_device(hdl);
    return 0;
}

Different buses have different mechanisms for determining which resources
have been assigned to the device. On some buses, such as the ISA bus,
there's no such mechanism. How do you determine whether an ISA device is
present in the system and how it's configured? The answer is card-dependent
(with the exception of "PnP" ISA devices).
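
For illustration only, here's a rough sketch of what such a card-dependent
probe might look like, assuming a hypothetical ISA card that echoes a known
ID value from a readable register at a fixed port (the port number, ID
value, and function name below are made up):

#include <stdint.h>
#include <sys/mman.h>
#include <sys/neutrino.h>
#include <hw/inout.h>

#define MYCARD_ID_PORT  0x300   /* hypothetical ID register */
#define MYCARD_ID_VALUE 0x5a    /* hypothetical signature byte */

int mycard_present(void)
{
    uintptr_t port;

    /* Gain I/O privileges before touching any ports */
    ThreadCtl(_NTO_TCTL_IO, 0);

    port = mmap_device_io(1, MYCARD_ID_PORT);
    if (port == MAP_DEVICE_FAILED)
        return 0;

    /* The card is "present" if its ID register reads back as expected */
    return in8(port) == MYCARD_ID_VALUE;
}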

Accessing the hardware

Once you've determined what resources have been assigned to the device,
you're now ready to start communicating with the hardware. How you do this
depends on the resources.

1. I/O resources
Before a thread may attempt any port I/O operations, it must be running at
the correct privilege level. The following call will ensure that the thread
is permitted to access I/O ports: ThreadCtl(_NTO_TCTL_IO, 0);

Without this call, you'll get a protection fault upon attempted I/O
operations.

Next you need to map the I/O base address (one of the addresses returned in
the CpuBaseAddress array of the info structure above). For example:

uintptr_t iobase;

iobase = mmap_device_io(info.BaseAddressSize[2],
    info.CpuBaseAddress[2]);

Now you may perform port I/O, using functions such as in8(), in32(), out8(),
etc, adding the register index to iobase to address a specific register:

out32(iobase + SHUTDOWN_REGISTER, 0xdeadbeef);

Note that the call to mmap_device_io() isn't necessary on x86 systems, but
it's still a good idea to include it for the sake of portability. In the case
of some legacy x86 hardware, it may not make sense to call mmap_device_io().
For example, a VGA-compatible device has I/O ports at well-known, fixed
locations (e.g. 0x3c0, 0x3d4, 0x3d5) with no concept of an I/O base as such.
You could access the VGA controller, for example, as follows:

out8(0x3d4, 0x11);
out8(0x3d5, in8(0x3d5) & ~0x80);

2. Memory-mapped resources
For some devices, registers are accessed via regular memory operations. To
gain access to a device's registers, you need to map them to a pointer in
the driver's virtual address space. This can be done by calling
mmap_device_memory().

volatile uint32_t *regbase; /* device has 32-bit registers */

regbase = mmap_device_memory(NULL, info.BaseAddressSize[0],
    PROT_READ|PROT_WRITE|PROT_NOCACHE, 0,
    info.CpuBaseAddress[0]);

Note that we specified the PROT_NOCACHE flag. This ensures that the CPU
won't defer or omit read/write cycles to the device's registers, nor will it
deliver the reads or writes to the registers in a different order than the
driver issued them.

Note also the use of the volatile keyword. This prevents the compiler from
"optimizing out" accesses to the device's registers.

Now you may access the device's memory using the regbase pointer. For
example:

regbase[SHUTDOWN_REGISTER] = 0xdeadbeef;

3. IRQs
You can attach an interrupt handler to the device by calling either
InterruptAttach() or InterruptAttachEvent(). For example:

InterruptAttach(_NTO_INTR_CLASS_EXTERNAL | info.Irq,
    handler, NULL, 0, _NTO_INTR_FLAGS_END);

The driver should call ThreadCtl(_NTO_TCTL_IO, 0); before attaching an
interrupt.

The essential difference between InterruptAttach() and
InterruptAttachEvent() is the way in which the driver is notified that the
device has triggered an interrupt.

With InterruptAttach(), the driver's "handler" function is called directly
by the kernel. Since it's running in kernel space, the handler is severely
restricted in what it can do. From within this handler, it isn't safe to
call most of the C library functions. Also, if you spend too much time in
the handler, other processes and interrupt handlers of a lower or equal
priority won't be able to run. Doing too much work in the interrupt handler
can negatively affect the system's realtime responsiveness.

We recommend that you do the bare minimum within the handler and return an
event to be delivered to the driver at process level. The rest of the work
associated with handling the interrupt can then be completed at process
level, at the driver's normal priority.

Typically, a driver would simply acknowledge the interrupt at the hardware
level within its interrupt handler and would return an event in order to
wake up the driver thread that will perform the remainder of the processing.
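
As a rough sketch of that pattern (the register index and the "clear
interrupt" write below are hypothetical -- a real device has its own
acknowledge sequence):

#include <stdint.h>
#include <sys/neutrino.h>
#include <sys/siginfo.h>
#include <hw/pci.h>

static struct sigevent int_event;
static volatile uint32_t *regbase;      /* mapped as shown earlier */

#define INTR_STATUS_REG 0               /* hypothetical register index */

/* Runs in kernel context: acknowledge the device, wake the driver thread */
static const struct sigevent *
handler(void *area, int id)
{
    regbase[INTR_STATUS_REG] = 1;       /* hypothetical "clear interrupt" write */
    return &int_event;                  /* delivered as SIGEV_INTR to our thread */
}

static void
intr_thread(struct pci_dev_info *info)
{
    ThreadCtl(_NTO_TCTL_IO, 0);
    SIGEV_INTR_INIT(&int_event);

    InterruptAttach(_NTO_INTR_CLASS_EXTERNAL | info->Irq,
        handler, NULL, 0, _NTO_INTR_FLAGS_END);

    for (;;) {
        InterruptWait(0, NULL);   /* blocks until handler returns the event */
        /* ... do the real work here, at the driver's normal priority ... */
    }
}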

It's often possible to do all the interrupt handling at the process level.
In this case, you should call InterruptAttachEvent(). When the device
triggers an interrupt, the kernel will automatically deliver an event to the
driver.
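
Here's a similar sketch using InterruptAttachEvent() (again, just a
skeleton; with this call the kernel masks the interrupt level when it
delivers the event, so the thread must unmask it after servicing the
device):

#include <sys/neutrino.h>
#include <sys/siginfo.h>

static void
wait_for_interrupts(int irq)
{
    struct sigevent event;
    int id;

    ThreadCtl(_NTO_TCTL_IO, 0);
    SIGEV_INTR_INIT(&event);

    /* No handler runs in kernel space; the kernel delivers the event
       and masks the IRQ each time the device interrupts. */
    id = InterruptAttachEvent(_NTO_INTR_CLASS_EXTERNAL | irq,
        &event, _NTO_INTR_FLAGS_END);

    for (;;) {
        InterruptWait(0, NULL);
        /* ... service and acknowledge the device here, at process level ... */
        InterruptUnmask(_NTO_INTR_CLASS_EXTERNAL | irq, id);
    }
}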

Before attempting to implement an interrupt handler, you should read the
online documentation very carefully. (See "Writing an Interrupt Handler" in
the QNX Neutrino Programmer's Guide.)

Now you should be ready to start programming the device's registers. Happy
bit-twiddling! ;-)

Eric Berdahl

Re: Article:Talking to hardware under QNX Neutrino

Post by Eric Berdahl » Wed Nov 22, 2000 8:44 pm

In article <8vgjsj$q6j$1@inn.qnx.com>, "Debbie Kane" <debbie@qnx.com>
wrote:
2. Memory-mapped resources
For some devices, registers are accessed via regular memory operations. To
gain access to a device's registers, you need to map them to a pointer in
the driver's virtual address space. This can be done by calling
mmap_device_memory().

volatile uint32_t *regbase; /* device has 32-bit registers */

regbase = mmap_device_memory(NULL, info.BaseAddressSize[0],
    PROT_READ|PROT_WRITE|PROT_NOCACHE, 0,
    info.CpuBaseAddress[0]);

Note that we specified the PROT_NOCACHE flag. This ensures that the CPU
won't defer or omit read/write cycles to the device's registers, nor will it
deliver the reads or writes to the registers in a different order than the
driver issued them.
This is good for allowing a driver to access its device's registers, but
what about something like a video card's frame buffer? The difference is
that it is a common practice to memory map the frame buffer into the
client's address space so the client application can render directly
into the frame buffer. On other operating systems (e.g. Linux), this can
be done by mmap'ing the device:

fd = open("/dev/mypcicard", O_RDWR);
base = mmap(NULL, myFrameBufferSize,
    PROT_NOCACHE | PROT_READ | PROT_WRITE,
    MAP_SHARED, fd, 0 /* start of frame buffer */);

If I were writing a driver for a PCI multimedia card with a frame
buffer, how would I map the frame buffer into the client application's
address space? Would I advise the client to use mmap? If so, how do I
implement the io_mmap hook in my driver (I can't find any docs on this)?
If not, what method is preferred under Neutrino?

Suggestions, hints, and pointers all accepted.

Thanks in advance,
Eric

David Donohoe

Re: Article:Talking to hardware under QNX Neutrino

Post by David Donohoe » Thu Nov 23, 2000 1:22 am

Eric Berdahl <berdahl@intelligentparadigm.com> wrote:
In article <8vgjsj$q6j$1@inn.qnx.com>, "Debbie Kane" <debbie@qnx.com>
wrote:

2. Memory-mapped resources
For some devices, registers are accessed via regular memory operations. To
gain access to a device's registers, you need to map them to a pointer in
the driver's virtual address space. This can be done by calling
mmap_device_memory().

volatile uint32_t *regbase; /* device has 32-bit registers */

regbase = mmap_device_memory(NULL, info.BaseAddressSize[0],
    PROT_READ|PROT_WRITE|PROT_NOCACHE, 0,
    info.CpuBaseAddress[0]);

Note that we specified the PROT_NOCACHE flag. This ensures that the CPU
won't defer or omit read/write cycles to the device's registers, nor will it
deliver the reads or writes to the registers in a different order than the
driver issued them.

This is good for allowing a driver to access its device's registers, but
what about something like a video card's frame buffer? The difference is
that it is a common practice to memory map the frame buffer into the
client's address space so the client application can render directly
into the frame buffer. On other operating systems (e.g. linux), this can
be done by mmap'ing the device:

fd = open("/dev/mypcicard", O_RDWR);
base = mmap(NULL, myFrameBufferSize,
PROT_NOCACHE | PROT_READ | PROT_WRITE,
0, fd, 0 /* start of frame buffer */);

If I were writing a driver for a PCI multimedia card with a frame
buffer, how would I map the frame buffer into the client application's
address space? Would I advise the client to use mmap? If so, how do I
implement the io_mmap hook in my driver (I can't find any docs on this)?
If not, what method is preferred under Neutrino?
The call to mmap_device_memory() above would also work for a frame buffer.
The driver would simply supply the physical address and size of the frame
buffer, instead of the register bank's physical address and size.
This would return a pointer to the frame buffer within the driver's address
space.

However, you could also have the driver allow an application to
access the frame buffer, by creating a shared memory object with
the appropriate permissions. See the shm_open() docs.

This would create a shared memory object under /dev/shmem, which
the app could access using shm_open() and mmap().

The trick is to make the shared memory object overlay the physical
memory of the frame buffer. This can be achieved by calling
shm_ctl() and passing the SHMCTL_PHYS flag.

Eric Berdahl

Re: Article:Talking to hardware under QNX Neutrino

Post by Eric Berdahl » Thu Nov 23, 2000 5:39 am

In article <8vhrfv$4sq$1@nntp.qnx.com>, David Donohoe
<ddonohoe@qnx.com> wrote:
The call to mmap_device memory above, would also work for a frame buffer.
The driver would simply supply the physical address and size of the frame
buffer, instead of the resister bank's physical address and size.
This would return a pointer to the frame buffer within drivers address
space.
Right. This is how the driver itself is currently accessing the frame
buffer. The trick, as you note, is to allow a client application to
access the frame buffer.
However, you could also have the driver allow an application to
access the frame buffer, by creating a shared memory object with
the appropriate permissions. See the shm_open() docs.

This would create a shared memory object under /dev/shmem, which
the app could access using shm_open() and mmap().

The trick is to make the shared memory object overlay the physical
memory of the frame buffer. This can be achieved by calling
shm_ctl() and passing the SHMCTL_PHYS flag.
I ran across this in the docs. I figured this was the way to go.
However, I cannot get shm_ctl to succeed. Every time I call it, I get an
error (ENOSYS) from the call. Is shm_ctl not implemented in Neutrino?

From what I've seen in the docs

shm_ctl(fd, SHMCTL_ANON|SHMCTL_PHYS, frameBufferPhysical,
frameBufferSize);

is the way to go. Unfortunately, every time I call the routine, it
returns an error (errno=ENOSYS).

Any idea what I'm missing?

David Donohoe

Re: Article:Talking to hardware under QNX Neutrino

Post by David Donohoe » Thu Nov 23, 2000 3:39 pm

Eric Berdahl <berdahl@intelligentparadigm.com> wrote:
In article <8vhrfv$4sq$1@nntp.qnx.com>, David Donohoe
<ddonohoe@qnx.com> wrote:

The call to mmap_device memory above, would also work for a frame buffer.
The driver would simply supply the physical address and size of the frame
buffer, instead of the resister bank's physical address and size.
This would return a pointer to the frame buffer within drivers address
space.

Right. This is how the driver itself is currently accessing the frame
buffer. The trick, as you note, is to allow a client application to
access the frame buffer.

However, you could also have the driver allow an application to
access the frame buffer, by creating a shared memory object with
the appropriate permissions. See the shm_open() docs.

This would create a shared memory object under /dev/shmem, which
the app could access using shm_open() and mmap().

The trick is to make the shared memory object overlay the physical
memory of the frame buffer. This can be achieved by calling
shm_ctl() and passing the SHMCTL_PHYS flag.

I ran across this in the docs. I figured this was the way to go.
However, I cannot get shm_ctl to succeed. Every time I call it, I get an
error (ENOSYS) from the call. Is shm_ctl not implemented in Neutrino?

From what I've seen in the docs

shm_ctl(fd, SHMCTL_ANON|SHMCTL_PHYS, frameBufferPhysical,
frameBufferSize);
You should omit the SHMCTL_ANON flag. This flag means that pages
of "anonymous" memory will be allocated and assigned to the
shared memory object. Without this flag, the memory assigned
to the object will be the memory specified by the frameBufferPhysical
argument.

Also, I assume "fd" was initialized by a call to shm_open. Make
sure you pass the O_CREAT and O_RDWR flags to shm_open.

Eric Berdahl

Re: Article:Talking to hardware under QNX Neutrino

Post by Eric Berdahl » Thu Nov 23, 2000 8:56 pm

In article <8vjdmn$2to$1@nntp.qnx.com>, David Donohoe
<ddonohoe@qnx.com> wrote:
Eric Berdahl <berdahl@intelligentparadigm.com> wrote:
In article <8vhrfv$4sq$1@nntp.qnx.com>, David Donohoe
<ddonohoe@qnx.com> wrote:
From what I've seen in the docs

shm_ctl(fd, SHMCTL_ANON|SHMCTL_PHYS, frameBufferPhysical,
frameBufferSize);

You should omit the SHMCTL_ANON flag. This flag means that pages
of "anonymous" memory will be allocated and assigned to the
shared memory object. Without this flag, the memory assigned
to the object will be the memory specified by the frameBufferPhysical
argument.

Also, I assume "fd" was initialized by a call to shm_open. Make
sure you pass the O_CREAT and O_RDWR flags to shm_open.
Here's the code I use:

fd = shm_open("/quackfb", O_RDWR | O_CREAT, 0777);
if (-1 == fd)
perror("shm_open");

if (-1 == ftruncate(fd, theSize))
perror("ftruncate");

if (-1 == shm_ctl(fd, SHMCTL_PHYS, thePhysicalAddr, theSize))
perror("shm_ctl");

When I run this from my driver, I get an error from shm_ctl
(errno=ENOSYS).

So, I'm still stuck. It appears as if shm_ctl is not implemented in
Neutrino. Am I missing something, or is there another mechanism I should
be using to map my card into a client's address space?

Thanks in advance,
Eric

David Donohoe

Re: Article:Talking to hardware under QNX Neutrino

Post by David Donohoe » Thu Nov 23, 2000 9:49 pm

Eric Berdahl <berdahl@intelligentparadigm.com> wrote:
In article <8vjdmn$2to$1@nntp.qnx.com>, David Donohoe
<ddonohoe@qnx.com> wrote:

Eric Berdahl <berdahl@intelligentparadigm.com> wrote:
In article <8vhrfv$4sq$1@nntp.qnx.com>, David Donohoe
<ddonohoe@qnx.com> wrote:
From what I've seen in the docs

shm_ctl(fd, SHMCTL_ANON|SHMCTL_PHYS, frameBufferPhysical,
frameBufferSize);

You should omit the SHMCTL_ANON flag. This flag means that pages
of "anonymous" memory will be allocated and assigned to the
shared memory object. Without this flag, the memory assigned
to the object will be the memory specified by the frameBufferPhysical
argument.

Also, I assume "fd" was initialized by a call to shm_open. Make
sure you pass the O_CREAT and O_RDWR flags to shm_open.

Here's the code I use:

fd = shm_open("/quackfb", O_RDWR | O_CREAT, 0777);
if (-1 == fd)
perror("shm_open");

if (-1 == ftruncate(fd, theSize))
perror("ftruncate");

if (-1 == shm_ctl(fd, SHMCTL_PHYS, thePhysicalAddr, theSize))
perror("shm_ctl");

When I run this from my driver, I get an error from shm_ctl
(errno=ENOSYS).

So, I'm still stuck. It appears as if shm_ctl is not implemented in
Neutrino. Am I missing something, or is there another mechanism I should
be using to map my card into a client's address space?
You're not running it as root, are you ;-)

Non-privileged apps are not allowed access to raw physical memory.

I think the error codes returned by this function are bogus.
No matter what the error is, it seems to return "Function not implemented".
It really should have said "Permission denied".

You should also omit the call to ftruncate, above.

Also, note that you can only do the shm_ctl once in the lifetime of the
object. If you need to modify/grow/shrink the object, you will need
to shm_unlink it, and re-create it.
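
Pulling the above together, a minimal sketch might look something like this
(run as root; the object name follows your example, the physical address and
size arguments are illustrative, and the size must be a multiple of the page
size):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

/* Driver side: expose the frame buffer as a named shared memory object. */
int
export_framebuffer(uint64_t fb_paddr, size_t fb_size)
{
    int fd = shm_open("/quackfb", O_RDWR | O_CREAT, 0777);

    if (fd == -1) {
        perror("shm_open");
        return -1;
    }

    /* No ftruncate(): SHMCTL_PHYS both sizes the object and overlays it
       onto the physical memory of the frame buffer. */
    if (shm_ctl(fd, SHMCTL_PHYS, fb_paddr, fb_size) == -1) {
        perror("shm_ctl");
        close(fd);
        return -1;
    }
    return fd;
}

/* Client side: map the same object into this process's address space. */
void *
map_framebuffer(size_t fb_size)
{
    int fd = shm_open("/quackfb", O_RDWR, 0);

    if (fd == -1)
        return NULL;
    return mmap(NULL, fb_size, PROT_READ | PROT_WRITE | PROT_NOCACHE,
        MAP_SHARED, fd, 0);
}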

Steven Dufresne

Re: Article:Talking to hardware under QNX Neutrino

Post by Steven Dufresne » Fri Nov 24, 2000 4:03 pm

David Donohoe <ddonohoe@qnx.com> wrote:
Eric Berdahl <berdahl@intelligentparadigm.com> wrote:
In article <8vjdmn$2to$1@nntp.qnx.com>, David Donohoe
<ddonohoe@qnx.com> wrote:

Eric Berdahl <berdahl@intelligentparadigm.com> wrote:
In article <8vhrfv$4sq$1@nntp.qnx.com>, David Donohoe
<ddonohoe@qnx.com> wrote:
From what I've seen in the docs

shm_ctl(fd, SHMCTL_ANON|SHMCTL_PHYS, frameBufferPhysical,
frameBufferSize);

You should omit the SHMCTL_ANON flag. This flag means that pages
of "anonymous" memory will be allocated and assigned to the
shared memory object. Without this flag, the memory assigned
to the object will be the memory specified by the frameBufferPhysical
argument.

Also, I assume "fd" was initialized by a call to shm_open. Make
sure you pass the O_CREAT and O_RDWR flags to shm_open.

Here's the code I use:

fd = shm_open("/quackfb", O_RDWR | O_CREAT, 0777);
if (-1 == fd)
perror("shm_open");

if (-1 == ftruncate(fd, theSize))
perror("ftruncate");

if (-1 == shm_ctl(fd, SHMCTL_PHYS, thePhysicalAddr, theSize))
perror("shm_ctl");

When I run this from my driver, I get an error from shm_ctl
(errno=ENOSYS).

So, I'm still stuck. It appears as if shm_ctl is not implemented in
Neutrino. Am I missing something, or is there another mechanism I should
be using to map my card into a client's address space?

You're not running it as root, are you ;-)

Non-privilidged apps are not allowed access to raw physical memory.

I think the error codes returned by this function are bogus.
No matter what the error is, it seems to return "Function not implemented".
It really should have said "Permission denied".

You should also omit the call to ftruncate, above.

Also, note that you can only do the shm_ctl once in the lifetime of the
object. If you need to modify/grow/shrink the object, you will need
to shm_unlink it, and re-create it.
It also looks like the size and lengths passed to shm_ctl() must be
even multiples of the page size (sysconf(_SC_PAGE_SIZE)).
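
For example, something along these lines before calling shm_ctl() (just a
sketch):

#include <unistd.h>

/* Round a requested length up to a whole number of pages. */
static size_t
round_to_page(size_t len)
{
    size_t pagesize = (size_t)sysconf(_SC_PAGE_SIZE);

    return (len + pagesize - 1) & ~(pagesize - 1);
}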

Warren Peece

Re: Article:Talking to hardware under QNX Neutrino

Post by Warren Peece » Fri Nov 24, 2000 4:24 pm

"Steven Dufresne" <stevend@qnx.com> wrote in message
news:8vm3gm$jl3$1@nntp.qnx.com...

| It also looks like the size and lengths passed to shm_ctl() must be
| even multiples of the page size (sysconf(_SC_PAGE_SIZE)).

What happens if the size and length parameters are not multiples of the page
size? Are they rounded up or do you get an error?

-Warren "Too Lazy To Test It" Peece

Steven Dufresne

Re: Article:Talking to hardware under QNX Neutrino

Post by Steven Dufresne » Fri Nov 24, 2000 10:06 pm

Warren Peece <warren@nospam.com> wrote:
"Steven Dufresne" <stevend@qnx.com> wrote in message
news:8vm3gm$jl3$1@nntp.qnx.com...

| It also looks like the size and lengths passed to shm_ctl() must be
| even multiples of the page size (sysconf(_SC_PAGE_SIZE)).

What happens if the size and length parameters are not multiples of the page
size? Are they rounded up or do you get an error?
shm_ctl() returns -1 and errno is ENOSYS (warning: it uses ENOSYS for
all errors).


Armin Steinhoff

Re: Article:Talking to hardware under QNX Neutrino

Post by Armin Steinhoff » Sat Nov 25, 2000 10:25 pm

Debbie Kane wrote:
Talking to hardware under QNX Neutrino
By Dave Donohoe, QNX Software Systems Ltd.
http://support.qnx.com/support/articles ... dware.html

[ clip ... ]

#include
#include
#include

main()
{
struct pci_dev_info info;
-------------------------------------------------------------------------------
The struct 'pci_dev_info' is not LINUX compatible!

I'm missing a clear description of the definition of
the struct pci_dev_info!!

What is e.g. the CpuIoTranslation and so on?
What is e.g. the PciBaseAddress ?? Is it the
contents of a Base Address Register ??

IMHO, if QSSL prefers to use non-standard PCI
structures ... then they should
provide a clear description of those structures!
--------------------------------------------------------------------------------
void *hdl;
int i;

memset(&info, 0, sizeof (info));

if (pci_attach(0) < 0) {
perror("pci_attach");
exit(EXIT_FAILURE);
}

/*
* Fill in the Vendor and Device ID for a 3dfx VooDoo3
* graphics adapter.
*/
info.VendorId = 0x121a;
info.DeviceId = 5;

if ((hdl = pci_attach_device(0,
PCI_SHARE|PCI_INIT_ALL, 0, &info)) == 0) {
perror("pci_attach_device");
exit(EXIT_FAILURE);
}
------------------------------------------------------------------------
Is there a way to get access to the standard PCI
config address space ??
------------------------------------------------------------------------
for (i = 0; i < 6; i++) {
if (info.BaseAddressSize > 0)

------------------------------------------------------------------------------------------
How do you determine the size of that piece of
memory or I/O address space ??
Does pci_attach_device() probe the hardware ...
using the addresses provided in the 'Base Address
Registers' of the PCI config space??
------------------------------------------------------------------------------------------

printf("Aperture %d: "
"Base 0x%llx Length %d bytes Type %s\n", i,
PCI_IS_MEM(info.CpuBaseAddress) ?
PCI_MEM_ADDR(info.CpuBaseAddress) :
PCI_IO_ADDR(info.CpuBaseAddress),
info.BaseAddressSize,
PCI_IS_MEM(info.CpuBaseAddress) ? "MEM" : "IO");
}

printf("IRQ 0x%x\n", info.Irq);

pci_detach_device(hdl);
}



Regards

Armin Steinhoff

David Donohoe

Re: Article:Talking to hardware under QNX Neutrino

Post by David Donohoe » Sun Nov 26, 2000 1:15 am

Armin Steinhoff <A-Steinhoff@web_.de> wrote:

Debbie Kane wrote:

Talking to hardware under QNX Neutrino
By Dave Donohoe, QNX Software Systems Ltd.
http://support.qnx.com/support/articles ... dware.html

[ clip ... ]

#include
#include
#include

main()
{
struct pci_dev_info info;

-------------------------------------------------------------------------------
The struct 'pci_dev_info' is not LINUX compatible!
I think it was mentioned somewhere in the article that writing
drivers for QNX was different than for more traditional UNIX style
operating systems.
I'm missing a clear description of the definion of
the struct pci_dev_info!!
The fields in this structure are described in the pci_attach_device()
docs.
What is e.g. the CpuIoTranslation a.s.o ?
As the docs say:
CPU to PCI translation value (pci_addr = cpu_addr - translation)

Suggests to me that for a given IO address in PCI space,
you could derive the corresponding address in CPU IO address
space by adding CpuIoTranslation to the PCI address.
What is e.g. the PciBaseAddress ?? Is it the
contents of a Base Address Register ??
The six elements in the PciBaseAddress array correspond to the
six PCI Base Address Registers in a PCI device's configuration
space. The addresses in PciBaseAddress[] are what the configuration
space registers are programmed with. The addresses in the
CpuBaseAddress array are the addresses which the CPU would use
to access the device. You would pass the CpuBaseAddress versions
to mmap_device_memory().

Note that on x86 systems, there is a one-to-one mapping between
the CPU and PCI versions of the base address registers. Hence,
the CpuIoTranslation, CpuMemTranslation, and CpuBmstrTranslation
members of the pci_dev_info structure would be zero, on an x86
system.
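
In other words (purely illustrative -- which BARs are I/O apertures and
which are memory apertures depends on the device):

/* Restating the docs' comment, pci_addr = cpu_addr - translation: */
uint64_t cpu_io_addr  = info.PciBaseAddress[n] + info.CpuIoTranslation;
uint64_t cpu_mem_addr = info.PciBaseAddress[m] + info.CpuMemTranslation;
/* On x86 the translations are zero, so both views are identical. */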
IMHO, if QSSL prefers to use non standard PCI
structures ... then they should
provide a clear decription of those structures!
Which "standard PCI structures" are you talking about?
Perhaps you are referring to the standard PCI configuration
space layout, which correlates to the _pci_config_regs
structure in /usr/include/hw/pci.h?

Armin Steinhoff

Re: Article:Talking to hardware under QNX Neutrino

Post by Armin Steinhoff » Sun Nov 26, 2000 10:34 am

David Donohoe wrote:
Armin Steinhoff <A-Steinhoff@web_.de> wrote:

Debbie Kane wrote:

Talking to hardware under QNX Neutrino
By Dave Donohoe, QNX Software Systems Ltd.
http://support.qnx.com/support/articles ... dware.html

[ clip ... ]

#include
#include
#include

main()
{
struct pci_dev_info info;

-------------------------------------------------------------------------------
The struct 'pci_dev_info' is not LINUX compatible!

I think it was mentioned somewhere in the article that writing
drivers for QNX was different than for more traditional UNIX style
operating systems.
/ rant mode ON

The semantics and documentation of the PCI library
calls have nothing to do with the structure of a
driver! What I'm expecting is that the QNX6 PCI
calls are at least compatible with QNX4 ... even
if QSSL doesn't care about the 'rest' of the UNIX
world!!
I'm missing a clear description of the definion of
the struct pci_dev_info!!

The fields in this structure are described in the pci_attach_device()
docs.
Yes ... but I would say the components of that
structure are only sparsely commented.
What is e.g. the CpuIoTranslation a.s.o ?

As the docs say:
CPU to PCI translation value (pci_addr = cpu_addr - translation)
The docs don't define what the pci_addr,
cpu_addr and the translations actually are.
Suggests to me that for a given IO address in PCI space,
you could derive the corresponding address in CPU IO address
space by adding CpuIoTranslation to the PCI address.
I know address translations only in the context of
PCI bridges ...
What is e.g. the PciBaseAddress ?? Is it the
contents of a Base Address Register ??

The six elements in the PciBaseAddress array correspond to the
six PCI Base Address registers,
So why the different names ... the rest of the
world calls the PCI Base Address Registers by
the name used in the PCI configuration address
space structure.
in a PCI devices configuration
space. The addresses in PciBaseAddress[] are what the configuration
space registers would are programmed with. The addresses in the
CpuBaseAddress array are the addresses which the CPU would use
to access the device. You would pass the CpuBaseAddress versions
to mmap_device_memory.
So the CpuBaseAddress is represented by the bits
31:4 (memory) or 31:2 (I/O) of the Base Address
Registers(?).
Note that on x86 systems, there is a one-to-one mapping between
the CPU and PCI versions of the base address registers.
Are you sure? I believe you have to mask out at
least the 4 or 2 least significant bits. (see the
QNX4 macro PCI_MEM_ADDR() ... I hope you know it)
Hence,
the CpuIoTranslation, CpuMemTranslation, and CpuBmstrTranslation
members of the pci_dev_info structure would be zero, on an x86
system.
Important to know. Where can I find these
translation values in the PCI configuration
address space ?
IMHO, if QSSL prefers to use non standard PCI
structures ... then they should
provide a clear decription of those structures!

Which "standard PCI structures" are you talking about?
There is only one PCI standard ... do you know
others?
Perhaps you are referring to the standard PCI configuration
space layout, which correlates to the _pci_config_regs
structure in /usr/include/hw/pci.h?
I refer to the PCI header types 00h and 01h.

/ rant mode OFF

BTW .. there are not only standards, there is also
a product line with a design history, and this must
be attended to in order to provide compatibility and
portability ... at least between QNX4 and QNX6.

I'm not very happy with the _dotted_ product line
of QSSL.

Armin

Warren Peece

Re: Article:Talking to hardware under QNX Neutrino

Post by Warren Peece » Sun Nov 26, 2000 5:37 pm

Oh I just couldn't stay out of this one... :)


"Armin Steinhoff" <A-Steinhoff@web_.de> wrote in message
news:3A20E726.C3104B14@web_.de...
David Donohoe wrote:

Armin Steinhoff <A-Steinhoff@web_.de> wrote:


-------------------------------------------------------------------------------
The struct 'pci_dev_info' is not LINUX compatible!
My attitude: Good! If I wanted a Linux clone I'd use Linux. I much prefer
the clean QNX6 slate with concepts borrowed from wherever they make sense.
I think it was mentioned somewhere in the article that writing
drivers for QNX was different than for more traditional UNIX style
operating systems.

/ rant mode ON

The semantic and documentation of the PCI library
calls has nothing to do with the structure of a
driver! What I'm expecting is that the QNX6 PCI
calls are at least compatible with QNX4 ... even
if QSSL doesn't care about the 'rest' of the UNIX
world!!
How compatible is Microsoft Windows with the 'rest' of the UNIX world? What
about (ToBeOrNotTo)BeOs? I really doubt that anybody is going to be futzing
with the PCI calls if they're not writing a driver, which is different under
QNX than it is on other systems, so why are you bitching that it's not the
same? It just doesn't make sense.
I'm missing a clear description of the definion of
the struct pci_dev_info!!

The fields in this structure are described in the pci_attach_device()
docs.

Yes ... but I would say the components of that
structure are raw commented.
So what you're really saying is that even though it's documented, you think
it could be made clearer. The difference between "missing a clear
description" and "documentation could be made clearer" is that the first
one makes it sound like there is no documentation at all (which is what I
assumed you meant when I first read this, and it appears that David thought
the same thing) and the second makes it sound like even after reading the
documentation there were still many uncertainties about what exactly the
various fields were for.
What is e.g. the CpuIoTranslation a.s.o ?

As the docs say:
CPU to PCI translation value (pci_addr = cpu_addr - translation)

The docs doesn't define what the pci_addr,
cpu_addr and the translations are.

Suggests to me that for a given IO address in PCI space,
you could derive the corresponding address in CPU IO address
space by adding CpuIoTranslation to the PCI address.

I know address translations only in the context of
PCI bridges ...

What is e.g. the PciBaseAddress ?? Is it the
contents of a Base Address Register ??

The six elements in the PciBaseAddress array correspond to the
six PCI Base Address registers,

So and why the different names ... the rest of the
world is calling the PCI Base Address Registers by
its name as used in the PCI configuration address
space structure.
Hmmmm... "PciBaseAddress" array[6], and six "PCI Base Address registers".
Uhm, how close are you trying to get here, down to the spacing and
capitalization? C'mon you're whining about nothing.
in a PCI devices configuration
space. The addresses in PciBaseAddress[] are what the configuration
space registers would are programmed with. The addresses in the
CpuBaseAddress array are the addresses which the CPU would use
to access the device. You would pass the CpuBaseAddress versions
to mmap_device_memory.

So the CpuBaseAddress is represented by the bits
31:4 (memory) or 31:2 (I/O) of the Base Address
Registers(?).

Note that on x86 systems, there is a one-to-one mapping between
the CPU and PCI versions of the base address registers.

Are you sure? I believe you have to mask out at
least the 4 or 2 least significant bits. (see the
QNX4 macro PCI_MEM_ADDR() ... I hope you know it)
If he's not sure, I am. Just because the PCI standard stuffs some other
flags in the low order bits of the addresses definitely does not imply that
there are two address ranges, one for the CPU and one for the PCI bus. Those
bits mean various things and since addresses have to be aligned anyway, it
was no doubt a convenient place to present them.
Hence,
the CpuIoTranslation, CpuMemTranslation, and CpuBmstrTranslation
members of the pci_dev_info structure would be zero, on an x86
system.

Important to know. Where can I find these
translation values in the PCI configuration
address space ?

IMHO, if QSSL prefers to use non standard PCI
structures ... then they should
provide a clear decription of those structures!

Which "standard PCI structures" are you talking about?

There is only one PCI standard ... do you know
others?
Yeah great, Armin. There's one "standard structure" which is the data and
offsets defined within the PCI address space. QNX has chosen to load that
data from PCI space to memory into structures of their own design. Those QNX
structures are the ones David is referring to, and you're referring to the
one in PCI address space, so you two are talking about totally different
things. If you absolutely have to deal with everything just as the spec
says it is, then you should bypass the QNX PCI library completely and dive
right into the PCI address space on your own. Nobody's got a gun to your head
forcing you to use their (in your opinion, apparently bastardized) libraries
and structures.
Perhaps you are referring to the standard PCI configuration
space layout, which correlates to the _pci_config_regs
structure in /usr/include/hw/pci.h?

I refer to the PCI header type 00h and 01h.

/ rant mode OFF
Somehow I doubt this very much.
BTW .. there are not only standards, there is also
a product line with a design history and this must
be attended in order to provide compatibility and
portability ... at least between QNX4 and QNX6.
Why on earth would you want QNX6 to be compatible with QNX4 down to the
device driver level? That's gotta be one of the stupidest comments I've
ever heard. If that's your priority then stick with QNX4 for God's sake.
The architectural differences between the two (which I'm wholly thankful
for) prohibit any such low level portability. I'm not just talking about
the PCI data structures either, I'm talking about the shared objects and CAM
and io-net structure and so on. QNX6 has some major advances and I'd be
pissed off if they were all abandoned in the name of compatibility with
QNX4.
I'm not very happy with the _dotted_ product line
of QSSL.

Armin
Then you should stop using it and go bitch somewhere else. If you've got
something constructive to say about bugs or the documentation that's fine,
but sitting there whining about structure field names and that QNX6 isn't
like every other O.S. at the device driver level is simply pathetic. It
seems as if you just like to hear yourself complain.

-Warren Peece

Armin Steinhoff

Re: Article:Talking to hardware under QNX Neutrino

Post by Armin Steinhoff » Sun Nov 26, 2000 8:21 pm

Warren Peece wrote:
Oh I just couldn't stay out of this one... :)

"Armin Steinhoff" <A-Steinhoff@web_.de> wrote in message
news:3A20E726.C3104B14@web_.de...

David Donohoe wrote:

Armin Steinhoff <A-Steinhoff@web_.de> wrote:


------------------------------------------------------------------------
The struct 'pci_dev_info' is not LINUX compatible!

My attitude: Good! If I wanted a Linux clone I'd use Linux. I much prefer
the clean QNX6 slate with concepts borrowed from wherever they make sense.
You are missing my point completely ... here's some
info especially for you.

From the article "QNX Opens Platform for
Developers / OTTAWA, April 24, 2000":

"In addition, QNX Software has integrated a high
level of Linux compatibility into its new platform
so that Linux developers will feel right at home.
As a result, e-device builders can now leverage
the enormous pool of Linux talent, while building
their products on reliable, market-proven QNX
technology."

[ clip .. ]
Which "standard PCI structures" are you talking about?

There is only one PCI standard ... do you know others?

Yeah great Armin. There's one "standard structure" which is the data and
offsets defined within the PCI address space.
There are _two_ header types.... that means there
are at least 2 standard structures of the single
PCI standard !
QNX has chosen to load that
data from PCI space to memory into structures of their design.
I can live with it as long as their design is well
documented ...

[ clip ..]
Then you should stop using it and go bitch somewhere else.
I don't discuss on that impossible level.

Armin
