VME Frontend Controller Hardware Configuration
The data acquisition setup for nuclear physics experiments at ELBE is based on a mixed, synchronized readout of various digitizers in different form factors (crates). The central processor units (CES RIO2 8062 and CES RIO3 8064) reside in VME crates and run the readout software MBS, developed at GSI by Dr. Nikolaus Kurz and Dr. Hans Essel (the work of all people who contributed to MBS is gratefully acknowledged).
The CPUs run the Unix-based real-time operating system LynxOS.
The proper hardware configuration settings are listed in the table below. They are stored in non-volatile memory (NVRAM), which is accessed from the PPC_Mon monitor prompt.
Both machines boot diskless via TFTP from the Linux-based boot server URANIA.
The operating system and the user disk space also reside on this machine, in /lynx/RIO2_2.5 (/lynx/RIO3_3.1 for rio3). The Lynx directory is NFS-mounted during startup and the proper machine names are assigned automatically.
Remarks:
For each machine that is added, a new boot file referring to the new machine name has to be requested from GSI. This procedure ensures proper LynxOS licensing. Each machine has to be included in the /etc/exports file on our boot server urania, and the command exportfs -a has to be issued afterwards.
Furthermore, in order to access machines behind the firewall at the ELBE accelerator, some changes have been made in /home/lynx/RIO2_2.5/net_boot:
# set new netmask and add default route
ifconfig amd0 `/bin/hostname` netmask 255.255.252.0
route add 149.220.0.0 149.220.60.1
The first line ensures proper subnet access (normally the netmask is 255.255.255.0), and the second line adds all machines of the 56 subnet behind the firewall to the routing table. Do not put the IP address directly here, since several machines boot the same code (hence the use of /bin/hostname).
Additionally, some file systems exported by urania are then mounted to make extra disk space available:
#
mkdir /mbs
echo "mounting urania:/home/lynx/RIO2_2.5/mbs on /mbs readonly"
/bin/mount -o ro urania:/home/lynx/RIO2_2.5/mbs /mbs
#echo "mounting urania:/home/data on /mnt readonly"
#/bin/mount -o rw urania:/home/data /mnt
echo "mounting urania:/home/data2 on /mnt"
/bin/mount -o rw urania:/home/data2 /mnt
#
Hardware settings:
Parameter settings for rio, rio2, rio3, rio4, rio5, and rio6.fz-rossendorf.de
Boot server: | urania / zirkonia | |
BOOT parameters | ||
boot_flags [afmnNS] : | a | |
boot_device [s<1-2>d<0-7>, le, ... fp] : | le | |
boot_filename : | /tftpboot/lynx/2.5_rio2_net.boot.FZR | LynxOS 2.5 (RIO2) |
boot_filename : | /tftpboot/lynx/3.1_rio3_net.boot.FZR | LynxOS 3.1 (RIO3) |
boot_filename : | /tftpboot/lynx/4.0_rio4_net.boot.FZD | LynxOS 4.0 (RIO4) |
boot_rootfs [s<1-2>d<0-7>, rd] : | rd | |
boot_delay : | 7 sec | |
boot_address : | 4000 | |
boot_size : | 6000000 for RIO | NRF setup |
boot_size : | 2000000 for RIO2 | FFS setup |
boot_size : | 2000000 for RIO3 | NToF setup |
boot_size : | 8000000 for RIO4 | RPC setup |
boot_size : | 8000000 for RIO5 | RPC setup |
boot_size : | 4000000 for RIO6 | NToF setup |
boot_size : | 4000000 for RIO7 | NToF setup |
boot_size : | 8000000 for RIO40 | GiPS setup |
boot_size : | 8000000 for RIO41 | GiPS setup |
boot_fast [ y n ] : | n | |
boot_filetype [ binary auto lynx prep srec ] : | binary | |
INTERNET parameters | ||
inet_host [dotted decimal] : | 149.220.60.46 for RIO | NRF setup |
inet_host [dotted decimal] : | 149.220.60.47 for RIO2 | FFS setup |
inet_host [dotted decimal] : | 149.220.60.45 for RIO3 | NToF setup |
inet_host [dotted decimal] : | 149.220.60.53 for RIO4 | RPC setup |
inet_host [dotted decimal] : | 149.220.60.48 for RIO5 | RPC setup |
inet_host [dotted decimal] : | 149.220.60.30 for RIO6 | NToF setup |
inet_host [dotted decimal] : | 149.220.60.49 for RIO7 | NToF setup |
inet_host [dotted decimal] : | 149.220.60.40 for RIO40 | GiPS setup |
inet_host [dotted decimal] : | 149.220.60.65 for RIO41 | GiPS setup |
inet_bplane [dotted decimal] : | 0.0.0.0 | |
inet_bootserver [dotted decimal] : | 149.220.61.149 (urania) / 149.220.60.12 (zirkonia) | |
inet_gateway [dotted decimal] : | 149.220.4.1 | |
inet_nameserver [dotted decimal] : | 149.220.4.2 | |
inet_protocol [ arpa bootp ] : | arpa | |
inet_mount : | urania:/home/lynx/fzd or zirkonia:/home/lynx/fzd | |
VME parameters | ||
VME arbitration | ||
vme_arb_mode [ prio rrs ] : | prio | |
vme_arb_mode [ prio rrs ] : | rrs for RIO6 | |
vme_arb_bto [ 0 16 64 256 ] : | 16 | |
vme_arb_bto [ 0 16 64 256 ] : | 32 for RIO6 | |
VME requester | ||
vme_req_mode [ ror rwd fair ] : | rwd | |
vme_req_mode [ ror rwd fair ] : | fair for RIO6 | |
vme_req_hidden [ y n ] : | n | |
vme_req_level [ 0 1 2 3 ] : | 3 | |
vme_req_level [ 0 1 2 3 ] : | 1 for RIO6 | |
vme_req_lto [ 8 16 32 64 ] : | 32 | |
vme_req_retry [ y n ] : | n | |
VME slave port | ||
vme_slv_a24base [0xX00000<.e,.d>] : | 0x000000 disabled | |
vme_slv_a32base [0xXX000000<.e,.d>] : | 0x00000000 disabled | |
vme_slv_a32size [ 16 32 64 128 ] : | 16 | |
vme_slv_latency : | 127 clock cycles | |
VME master port | ||
VME block mover | ||
vme_bma_swap [ noswap autoswap ] : | autoswap | |
vme_bma_am [ a24u ...am14 ] : | a32u | |
vme_bma_dsz [ d32 d64 ] : | d32 | |
vme_bma_vbsz [ 0 1 2 4 8 16 32 ] : | 4 | |
vme_bma_pbsz [ 0 4 8 16 ] : | 4 | |
vme_bma_inc [ y n ] : | y | |
vme_bma_space [ pio pmem pmrm sysmem ] : | sysmem | |
DIAGNOSTIC flags | ||
diag_sysmem [y/n]: | n | |
diag_mmu [y/n]: | n | |
diag_intctl [y/n]: | n | |
diag_serial [y/n]: | n | |
diag_timers [y/n]: | n | |
diag_fifos [y/n]: | n | |
diag_keyboard [y/n]: | n | |
diag_thermo [y/n]: | n | |
diag_pci [y/n]: | n | |
diag_ethernet [y/n]: | n | |
diag_pmc1 [y/n]: | n | |
diag_pmc2 [y/n]: | n | |
diag_vme [y/n]: | n | |
diag_bridge [y/n]: | n | |
BRIDGE parameters | ||
BRIDGE ECC control | ||
bridge_ecc_count : | 0 | |
BRIDGE PCI control | ||
bridge_pci_serr [ y n ] : | n | |
bridge_pci_parity [ y n ] : | n | |
bridge_pci_watchdog [ y n ] : | n | |
bridge_pci_disc : | 0 | |
INTERRUPT parameters | ||
intr_mask_err [y/n]: | y | |
intr_debug [y/n]: | n | |
CACHE parameters | ||
Instruction cache control | ||
cache_i_ena [ y n ] : | n | |
Data cache control | ||
cache_d_ena [ y n ] : | n | |
L2 cache control | ||
cache_l2_ena [ y n ] : | n | |
cache_l2_parity [ y n ] : | n | |
User's parameters | ||
para_1 : | ||
para_2 : | ||
para_3 : | ||
para_4 : | ||
para_5 : | ||
para_6 : | ||
para_7 : | ||
para_8 : | ||
para_9 : | ||
para_10 : | ||
para_11 : | ||
para_12 : | ||
para_13 : | ||
para_14 : | ||
para_15 : |
VME Modules Hardware Configuration
VME modules are identified to the controller by an unambiguous, hardware-selectable address dialed in via rotary switches located on the board. The TRIVA3 base address defines the lower end of the address space accessible by standard pointers from inside f_user.c. Therefore, each pointer has to be calculated by subtracting the TRIVA3 base address (0x2000000). Addresses below the TRIVA3 base address have to be accessed by a separate VME mapping.
VME Modules Base Address
Module | Owner | Serial Number | Base Address |
WIENER VC32-CC32 (No. 00) | FWKK | - | 0x00500000 |
WIENER VC32-CC32 (No. 01) | FWKK | - | 0x00510000 |
WIENER VC32-CC32 (No. 02) | FWKK | 3697015 | 0x00520000 |
WIENER VC32-CC32 (No. 03) | FWKK | 2396013 | 0x00530000 |
FWKK Silena-VME Interface (No. 01) | FWKK | 0 | 0x01000000 |
GSI TRIVA3 | FWKK | - | 0x02000000 |
GSI TRIVA3 | FWKK | - | 0x02000000 |
GSI TRIVA3 | FWKK | - | 0x02000000 |
CAEN V486 GG (No. 00) | FWKK | 30 | 0x03000000 |
CAEN V486 GG (No. 01) | FWKK | 59 | 0x03001000 |
CAEN V512 PLU (No. 00) | FWKK | 41 | 0x03008000 |
CAEN V512 PLU (No. 01) | FWKK | 33 | 0x03009000 |
CAEN V513 I/O (No. 00) | FWKK | 143 | 0x0300a000 |
CAEN V812 CFD (No. 00) | FWKK | 84 | 0x03100000 |
CAEN V812 CFD (No. 01) | FWKK | 85 | 0x03200000 |
CAEN V812 CFD (No. 02) | FWKK | 130 | 0x03300000 |
CAEN V812 CFD (No. 03) | FWKK | 131 | 0x03400000 |
CAEN V812 CFD (No. 04) | FWKK | 132 | 0x03500000 |
CAEN V812 CFD (No. 05) | FWKK | 133 | 0x03600000 |
CAEN V556 ADC (No. 00) | FWKK | 49 | 0x04000000 |
CAEN V556 ADC (No. 01) | FWKK | 50 | 0x04001000 |
CAEN V556 ADC (No. 02) | FWKK | 51 | 0x04002000 |
CAEN V556 ADC (No. 03) | FWKK | 54 | 0x04003000 |
CAEN V556 ADC (No. 04) | FWKK | ? | 0x04004000 |
CAEN V556 ADC (No. 05) | FWKK | ? | 0x04005000 |
CAEN V556 ADC (No. 06) | FWKK | ? | 0x04006000 |
CAEN V556 ADC (No. 07) | FWKK | ? | 0x04007000 |
CAEN V556 ADC (No. 08) | FWKK | ? | 0x04008000 |
CAEN V556 ADC (No. 09) | FWKK | ? | 0x04009000 |
CAEN V1785N ADC (No. 00) | FWKK nELBE | 198 | 0x04100000 |
CAEN V1785N ADC (No. 01) | FWKK nELBE | 204 | 0x04110000 |
CAEN V1785N ADC (No. 02) | FWKK | 226 | 0x04120000 |
CAEN V1785N ADC (No. 03) | FWKK | 227 | 0x04130000 |
CAEN V1785N ADC (No. 04) | FWKK nELBE | 225 | 0x04140000 |
CAEN V775 TDC (No. 00) | FWKK | 162 | 0x05000000 |
CAEN V775 TDC (No. 01) | FWKK | 153 | 0x05010000 |
CAEN V1190 TDC (No. 00) | FWKK nELBE | 24 | 0x05040000 |
CAEN V1190 TDC (No. 01) | FWKK nELBE | 56 | 0x05050000 |
CAEN V1290 TDC (No. 00) | FWKK | 92 | 0x03400000 |
CAEN V1290A TDC (No. 00) | FWKK nELBE | 280 | 0x05070000 |
CAEN V1290A TDC (No. 01) | FWKK nELBE | 286 | 0x05080000 |
CAEN V1290N TDC (No. 00) | FWKH | 0x03100000 | |
CAEN V1290N TDC (No. 01) | FWKH | 0x03300000 | |
CAEN V1290N TDC (No. 02) | FWKK nELBE | 265 | 0x05060000 |
CAEN V1495 GP VME Board (No. 00) | FWKK | 47 | 0x05100000 |
CAEN V1495 GP VME Board (No. 01) | FWKK nELBE | 48 | 0x05110000 |
CAEN V1495 GP VME Board (No. 02) | FWKK | 151 | 0x05120000 |
CAEN V1495 GP VME Board (No. 03) | FWKK | 129 | 0x05130000 |
CAEN V1495 GP VME Board (No. 04) | FWKK nELBE | 363 | 0x05140000 |
CAEN V1495 GP VME Board (No. 05) | FWKK nELBE | 319 | 0x05150000 |
CAEN V1495 GP VME Board (No. 06) | FWKK nELBE | 445 | 0x05160000 |
CAEN V1495 GP VME Board (No. 07) | FWKK nELBE | 458 | 0x05170000 |
CAEN V1495 GP VME Board (No. 08) | FWKK nELBE | 496 | 0x05180000 |
CAEN V1495 GP VME Board (No. 09) | FWKK nELBE | 500 | 0x05190000 |
CAEN V792 QDC (No. 00) | FWKK | 244 | 0x06000000 |
CAEN V792 QDC (No. 01) | FWKK | 243 | 0x06010000 |
CAEN V792 QDC (No. 02) | FWKK | 329 | 0x06020000 |
CAEN V792 QDC (No. 03) | FWKK | 330 | 0x06030000 |
CAEN V792 QDC (No. 04) | FWKK | 331 | 0x06040000 |
CAEN V965 Dual range QDC (No. 00) | FWKK | 2392 | 0x06200000 |
CAEN V965 Dual range QDC (No. 01) | FWKK | 2423 | 0x06210000 |
CAEN V965A Dual range QDC (No. 00) | FWKK nELBE | 190 | 0x06100000 |
CAEN V965A Dual range QDC (No. 01) | FWKK nELBE | 206 | 0x06110000 |
CAEN V965A Dual range QDC (No. 02) | FWKK nELBE | 207 | 0x06120000 |
CAEN V874A QDC/TDC (No. 00) | FWKK nELBE | 181 | 0x10000000 |
CAEN V874A QDC/TDC (No. 01) | FWKK nELBE | 182 | 0x11000000 |
CAEN V874A QDC/TDC (No. 02) | FWKK nELBE | 200 | 0x12000000 |
CAEN V874A QDC/TDC (No. 03) | FWKK | 210 | 0x13000000 |
CAEN V874A QDC/TDC (No. 04) | FWKK nELBE | 211 | 0x14000000 |
CAEN V874A QDC/TDC (No. 05) | FWKK nELBE | 212 | 0x15000000 |
CAEN V874A QDC/TDC (No. 06) | FWKK | 218 | 0x16000000 |
CAEN V874A QDC/TDC (No. 07) | FWKK nELBE | 225 | 0x17000000 |
CAEN V874A QDC/TDC (No. 08) | FWKK nELBE | 227 | 0x18000000 |
CAEN V874A QDC/TDC (No. 09) | FWKK | 228 | 0x19000000 |
CAEN V874A QDC/TDC (No. 10) | FWKK | 229 | 0x1a000000 |
CAEN V874A QDC/TDC (No. 11) | FWKK nELBE | 230 | 0x1b000000 |
CAEN V874A QDC/TDC (No. 12) | FWKK | 232 | 0x1c000000 |
CAEN V874A QDC/TDC (No. 13) | FWKK | 233 | 0x1d000000 |
CAEN V874A QDC/TDC (No. 14) | FWKK | 234 | 0x1e000000 |
CAEN V874A QDC/TDC (No. 15) | FWKK | 235 | 0x1f000000 |
CAEN V874A QDC/TDC (No. 16) | FWKK nELBE | 236 | 0x20000000 |
CAEN V874A QDC/TDC (No. 17) | FWKK nELBE | 238 | 0x21000000 |
CAEN V874A QDC/TDC (No. 18) | FWKK | 239 | 0x22000000 |
CAEN V874A QDC/TDC (No. 19) | FWKK | 241 | 0x23000000 |
SIS 3801 Scaler | FWKK | - | 0x07000000 |
SIS 3820 Scaler | FWKK | 115 | 0x08000000 |
SIS 3820 Scaler | FWKK nELBE | 116 | 0x09000000 |
SIS 3820 Scaler | FWKK nELBE | 211 | 0x0c000000 |
SIS 3820 Scaler | FWKK nELBE | 212 | 0x0d000000 |
SIS 3700 FERA-Interface | FWKK | 0x0a000000 | |
SIS 3700 FERA-Interface | FWKK | 0x0b000000 | |
Suggested values for RIO parameters
RAM size | boot_size | LOC_PIPE_BASE | PIPE_SEG_LEN |
64MB = 0x04000000 | 0x02000000 | 0x02000000 | 0x02000000 |
128MB = 0x08000000 | 0x04000000 | 0x04000000 | 0x04000000 |
256MB = 0x10000000 | 0x04000000 | 0x04000000 | 0x0c000000 |
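As a consistency check (assuming the sub-event pipe occupies the RAM directly above the space reserved for the operating system): in each row LOC_PIPE_BASE equals boot_size and RAM size = boot_size + PIPE_SEG_LEN, e.g. for 256 MB: 0x10000000 = 0x04000000 + 0x0c000000.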
Without any special mapping (for example for Wiener's VC32) the standard address modifier used in MBS is 0x09 (non-privileged data access); the data size depends on the register to be read and can be either D32 or D16, see the table and the short sketch below.
data size | pointer type | data type |
D32 | static long volatile *pl_my_address | unsigned long my_data |
D16 | static unsigned short volatile *ps_my_address | unsigned short my_data |
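The following sketch illustrates the two access widths together with the TRIVA3 offset rule from the previous section. It is an illustration only, not actual f_user.c code; the module (CAEN V775 TDC from the base address table) and the register offsets are placeholders, not a real register map.

/* Illustrative sketch only, not actual f_user.c code: D32 and D16 register
   access through standard pointers. The pointer value is the module base
   address minus the TRIVA3 base address (0x02000000), as described above.
   The register offsets below are hypothetical placeholders.               */

#define TRIVA3_BASE  0x02000000UL
#define V775_BASE    0x05000000UL   /* CAEN V775 TDC (No. 00), from the table above */
#define REG_D32_OFF  0x0000UL       /* hypothetical 32-bit data register   */
#define REG_D16_OFF  0x1000UL       /* hypothetical 16-bit status register */

static long volatile           *pl_v775_data;  /* D32 pointer, as in the table */
static unsigned short volatile *ps_v775_stat;  /* D16 pointer, as in the table */

void example_v775_access (void)
{
  unsigned long  l_data;
  unsigned short s_stat;

  /* pointer = module base - TRIVA3 base + register offset */
  pl_v775_data = (long volatile *)(V775_BASE - TRIVA3_BASE + REG_D32_OFF);
  ps_v775_stat = (unsigned short volatile *)(V775_BASE - TRIVA3_BASE + REG_D16_OFF);

  l_data = *pl_v775_data;   /* D32 read */
  s_stat = *ps_v775_stat;   /* D16 read */
  (void)l_data; (void)s_stat;
}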
Addon: RIO4 VME mapping as described by N. Kurz
MBS Note 7, N. Kurz, EE, GSI, 3-June-2009: RIO4 system memory and VME mapping
PPC_Mon: boot_size: The parameter boot_size in the NVRAM of the RIO4 specifies the maximum memory size seen by the operating system (LynxOS, Linux). It can be set in the monitor program PPC_Mon. If set to 0 or 0x20000000 the whole memory (512 MB) is used by the system.
With boot_size some additional steering can be done:
------------------------------------------------------------------------------------------------------------
If boot_size is set to at most 0x10000000 (256 MB), two direct mappings for memory and VME are activated at kernel startup:
1) Direct address page 0xe0000000 points to memory address 0x0. This page has a size of 256 MB (0x10000000 bytes) and covers the first half of the memory.
2) Direct address page 0xf0000000 points to VME address 0x0 for A32 accesses (address modifier: 0x9). This page also has a size of 256 MB and therefore covers the VME address range from 0x0 to 0x0ffffffc. In both cases the direct mapping provides the fastest access modes possible on this hardware, i.e. the single cycle VME A32 speed is ~70 % faster than with other mapping procedures (see below). The covered space can be accessed directly by adding the memory or VME address to the appropriate page base. The disadvantage of using the direct VME page: no bus error occurs if one accesses a VME address behind which no hardware responds; instead the processor simply gets stuck.
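A minimal sketch of the address arithmetic for the two direct pages (assuming boot_size <= 0x10000000; the VME target address is taken from the module table above purely as an illustration):

/* Sketch only: address arithmetic for the two direct pages described above. */

#define DIRECT_MEM_PAGE  0xe0000000UL  /* -> memory address 0x0 (256 MB)           */
#define DIRECT_VME_PAGE  0xf0000000UL  /* -> VME A32 (AM 0x9) address 0x0 (256 MB) */
#define V775_BASE        0x05000000UL  /* illustrative module base from the table  */

void example_direct_access (void)
{
  /* memory address 0x01000000 seen through the direct memory page */
  unsigned long volatile *pl_mem =
    (unsigned long volatile *)(DIRECT_MEM_PAGE + 0x01000000UL);

  /* first longword of the V775 seen through the direct VME page; note that
     no bus error is raised if no hardware responds at this VME address,
     the processor simply gets stuck (see remark above)                    */
  unsigned long volatile *pl_vme =
    (unsigned long volatile *)(DIRECT_VME_PAGE + V775_BASE);

  unsigned long l_vme_val = *pl_vme;   /* single cycle A32/D32 read    */
  unsigned long l_mem_val = *pl_mem;   /* read through the memory page */
  (void)l_vme_val; (void)l_mem_val;
}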
------------------------------------------------------------------------------------------------------------
If boot_size is set to values larger than 0x10000000, only direct memory mapping is activated at kernel startup:
1) Direct address page 0xe0000000 points to memory address 0x0. This page has a size of 512 MB (0x20000000 bytes) and covers the whole memory. The usage of this direct memory mapping is the same as described above. Attention: In this setting all programs relying on direct (and fast) VME A32 mapping will fail without notification!
------------------------------------------------------------------------------------------------------------
Especially check LOC_PIPE_TYPE, LOC_PIPE_BASE and LOC_PIPE_SEG_LEN when working with MBS. The boot_size parameter can be checked with "/bin/ces/hwconfig show boot_size" in a running LynxOS system.
-----------------------------------------------------------------------------------------------------------
Memory mapping:
1) Physically non-consecutive memory mapping:
a) malloc: With the malloc function only ~33 MB in total can be allocated per process on the RIO3 (LynxOS 3.1) and RIO4 (LynxOS 4.0). In any case, malloc cannot be used for inter-process communication.
b) Process shared memory: mapping with the shm_open, ftruncate and mmap functions. In LynxOS 3.1 and earlier versions, shared memory for inter-process communication was obtained with the smem_get function. This function is no longer available in LynxOS 4.0 and therefore on the RIO4. The same functionality can be achieved by using the functions shm_open, ftruncate and mmap. To allow an easy migration of legacy applications, a wrapper function has been developed and put into the MBS sources from version 5.0 onwards. It has the identical name smem_get and follows the identical syntax as the original smem_get function. To use it, the application either has to be linked against the mbs_lib.a library or the file f_smem.c has to be included in the compile chain. This mapping procedure can also be used on PCs running LynxOS 4.0.
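A minimal sketch of the POSIX calls named above (this is not the actual f_smem.c wrapper; the segment name and size in the usage line are placeholders):

/* Sketch only, not the f_smem.c wrapper: process shared memory via
   shm_open, ftruncate and mmap (LynxOS 4.0 / POSIX).                 */
#include <sys/mman.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

void *get_shared_segment (const char *name, size_t size)
{
  int   fd;
  void *addr;

  fd = shm_open (name, O_CREAT | O_RDWR, 0666);       /* create or attach     */
  if (fd < 0) { perror ("shm_open"); return NULL; }

  if (ftruncate (fd, (off_t) size) < 0)                /* set the segment size */
    { perror ("ftruncate"); close (fd); return NULL; }

  addr = mmap (NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  close (fd);                                          /* mapping stays valid  */
  return (addr == MAP_FAILED) ? NULL : addr;
}

/* usage (hypothetical name and size): void *p = get_shared_segment ("/daq_buf", 0x100000); */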
2) Physically consecutive memory mapping:
a) Direct mapping as explained above at address offset 0xe0000000.
b) Process shared consecutive memory: mapping with the /dev/mem device and mmap. In LynxOS 3.1 and earlier versions, consecutively mapped shared memory for inter-process communication was obtained with the smem_create function. This function is no longer available in LynxOS 4.0 and therefore on the RIO4. The same functionality can be achieved by opening the /dev/mem device and mapping it with mmap. To allow an easy migration of legacy applications, a wrapper function has been developed and put into the MBS sources from version 5.0 onwards. It has the identical name smem_create and follows the identical syntax as the original smem_create function. To use it, the application either has to be linked against the mbs_lib.a library or the file f_smem.c has to be included in the compile chain. This mapping procedure can also be used on PCs running LynxOS 4.0.
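A corresponding sketch for physically consecutive memory (again not the actual smem_create wrapper; the physical address and size in the usage line are placeholders and must match the boot_size / pipe layout of the machine):

/* Sketch only, not the smem_create wrapper: physically consecutive memory
   mapped via /dev/mem and mmap (LynxOS 4.0).                              */
#include <sys/mman.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

void *map_phys_memory (unsigned long phys_addr, size_t size)
{
  int   fd;
  void *addr;

  fd = open ("/dev/mem", O_RDWR);
  if (fd < 0) { perror ("open /dev/mem"); return NULL; }

  addr = mmap (NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
               fd, (off_t) phys_addr);
  close (fd);
  return (addr == MAP_FAILED) ? NULL : addr;
}

/* usage (assuming boot_size = 0x08000000, mapping the memory above the OS):
   void *pipe_mem = map_phys_memory (0x08000000UL, 0x08000000UL);           */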
VME address space mapping: On the RIO3, 512 pages of 4 MB each add up to 2 GB of VME address space that can be mapped simultaneously. On the RIO4 this has been reduced to 256 pages of 4 MB, i.e. a total of 1 GB. On the RIO4 this 1 GB has to be distributed between static and dynamic VME mapping, see below:
1) Static VME mapping:
a) Direct mapping as explained above for A32 (AM: 9).
b) To allow an easy way to map VME address space, static pages have been configured on the RIO3 and the RIO4 in an identical scheme for single cycle accesses. Disadvantage: statically mapped VME space cannot be freed and is fixed to its (limited) VME address range and access type (address modifier). The static maps have been generously sized for GSI applications, but they turned out to be too big to leave enough space for flexible dynamic VME mapping on the RIO4. As an acceptable compromise the static mapping for A32 (AM = 9) accesses has been reduced from 256 MB (0x10000000) to 96 MB (0x06000000). The static map for MBLT (D64) VME block transfer has been removed completely. With this solution most legacy programs developed on the RIO3 also work on the RIO4. See the VME static pages for RIO3 and RIO4 below:
RIO3 static mapping:
Address Modifier | VME hardware address | Map address | Map size |
0x09(A32) | 0x0 | 0x50000000 | 0x10000000 |
0x39(A24) | 0x0 | 0x4f000000 | 0x1000000 |
0x29(A16) | 0x0 | 0x4E000000 | 0x10000 |
0x0B(BLT) | 0x0 | 0x50000000 | 0x10000000 |
0x08(MBLT) | 0x0 | 0x60000000 | 0x10000000 |
RIO4 static mapping:
Address Modifier | VME hardware address | Map address | Map size |
0x09(A32) | 0x0 | 0x50000000 | 0x6000000 |
0x39(A24) | 0x0 | 0x4f000000 | 0x1000000 |
0x29(A16) | 0x0 | 0x4E000000 | 0x10000 |
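As a worked example (assuming that, as for the direct pages, the mapped address is the map address plus the VME address): a module at VME A32 address 0x05000000, e.g. the V775 TDC from the table above, appears at 0x50000000 + 0x05000000 = 0x55000000 on both RIO3 and RIO4, i.e. still inside the reduced RIO4 map size of 0x06000000; a module at 0x06000000 or higher is no longer covered by the RIO4 static A32 map and needs dynamic mapping.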
Up to LynxOS 3.1, the mapping for single cycle VME access could be done with the smem_create function, with the map address and segment size as input parameters. On the RIO4 with LynxOS 4.0, either the /dev/mem device together with mmap or the smem_create wrapper can be used to map the segment, as described in section "Process shared consecutive memory". For VME block transfer aspects see below.
2) Dynamic VME single cycle transfer mapping: Up to the RIO3 with LynxOS 3.1, dynamic mapping was done with the find_controller function. It turns out that find_controller only works if the requested VME address range lies inside the static mapping, which makes it useless for dynamic VME mapping. A set of new functions for VME single cycle mapping is provided by CES for the RIO4 (bus_open, bus_map, bus_unmap and bus_close). Since their usage is not very user friendly, two wrapper functions (f_map_vme_rio4, f_unmap_vme_rio4), contained in the file f_map_vme_rio4.c, have been developed; they can be found under LynxOS in the directory /nfs/groups/daq/usr/kurz/rio4/busmap, together with a template main program showing their syntax and a Makefile, so that VME single cycle mapping on the RIO4 should be simple. It is planned to move f_map_vme_rio4.c and its prototype into the MBS sources after some extensive testing. In addition, a running MBS environment which makes use of this mapping method can be found in /nfs/groups/daq/usr/kurz/mbstest/rio4/rio4_busmap_sicy_vme. Within this MBS setup it has furthermore been verified that the following PPC_Mon parameter (boot_size) and MBS setup.usf parameters (LOC_PIPE_TYPE, LOC_PIPE_BASE, PIPE_SEG_LEN) work fine; see also the direct memory mapping explained above:
boot_size | LOC_PIPE_TYPE | LOC_PIPE_BASE | PIPE_SEG_LEN |
0x4000000 | 1 (direct mapping) | 0xE4000000 | 0xC000000 |
0x8000000 | 1 (direct mapping) | 0xE8000000 | 0x8000000 |
0x10100000 | 1 (direct mapping) | 0xF0100000 | 0xff00000 |
3) Dynamic VME Block transfer mapping:
As mentioned in 1), the static maps for VME block transfer have been removed on the RIO4 in order to free space for dynamic VME mapping. For dynamic VME block mapping a new function xvme_map is provided by CES. A fully functioning MBS environment using dynamic mapping for single cycle VME and the xvme_map function for dynamic VME block mapping can be found under LynxOS in: /nfs/groups/daq/usr/kurz/mbstest/rio4/rio4_busmap_bma_rd. This MBS setup can be used as a template for BLT, MBLT and the faster 2eVME and 2eSST VME block readout, if the VME slave modules provide these access modes. Various VME slaves have been tested: SAM3/5 (BLT, MBLT), VME memory from Micro Memory Inc (BLT), CAEN V878 (BLT, MBLT) and a RIO4 as VME slave (BLT, MBLT, 2eSST). Especially 2eSST transfers from RIO4 to RIO4 achieve speeds of up to 150 MB/s and are the fastest VME mode supported by MBS VME hard- and software. All three pipe settings shown above in 2) have been tested and work with VME block transfers.
(Very) large MBS sub-event pipes, (very) large consecutive shared mapped memory: If an application needs very big shared memory for fast online storage of data, or the MBS setup needs a large sub-event pipe, the following should be done and considered:
The memory for LynxOS should be set to a minimum of 0x4000000 (64 MB) with the boot_size parameter. With this setting a maximum of 0x1c000000 (448 MB) is available for user (shared and consecutive) mapping. Since boot_size is then below 0x10000000, fast direct single cycle A32 VME is enabled (within the restriction described above), but static and dynamic (single cycle and block) VME mapping can be used as well.
Fast direct memory mapping can only be used for pipes smaller than or equal to 0x10000000 (256 MB) and therefore cannot be used in this case. Instead the memory needs to be mapped with the /dev/mem device and the mmap function, or with the wrapper function smem_create; see 2) in "Memory mapping" above. Please note that memory mapped with this procedure is accessed at a lower speed than with direct memory mapping, but this should only come into effect for fast VME block transfers (MBLT, 2eSST) into this memory.
In /nfs/groups/daq/usr/kurz/mbstest/rio4/rio4_dimap_sicy_mmap_pipe a running MBS setup can be found which uses a 448 MB sub-event pipe and direct or static VME single cycle mapping. In /nfs/groups/daq/usr/kurz/mbstest/rio4/rio4_bus_map_bma_rd_mmap_pipe a running MBS setup can be found which uses dynamic VME single cycle and VME block readout together with a 448 MB sub-event pipe.
Thank you for your enduring attention!
Suggestions:
- Fix boot_size to 0x08000000 in PPC_Mon
- Fix LOC_PIPE_BASE to 0xe8000000 in setup.usf
- Fix PIPE_SEG_LEN to 0x08000000 in setup.usf
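These suggested values correspond to the 0x8000000 row of the PPC_Mon/setup.usf table in 2) above: with LOC_PIPE_TYPE 1 (direct mapping) the pipe base is the direct memory page plus boot_size, i.e. 0xe0000000 + 0x08000000 = 0xe8000000, and the pipe then ends at 0xe8000000 + 0x08000000 = 0xf0000000, exactly at the 256 MB limit of the direct memory page.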