  Error kernel paging request DomU zcu102
Posted by: ariefgrand - 02-14-2018, 09:59 AM - Forum: Public Support - Replies (1)

Hi guys,

I have a problem with a DomU under Xen running on the ZCU102 board.
I made a paravirtualized driver to enable FPGA access from the DomU; communication between Dom0 and the DomU is done via a shared ring.
From the DomU, the FPGA is accessed by opening the character device /dev/mydevice.
However, every time I call open() on it from userspace in the DomU, the kernel reports an error like this:

[   47.627911] Unable to handle kernel paging request at virtual address 4000200000001
[   47.627939] pgd = ffffffc00d5ad000
[   47.627948] [4000200000001] *pgd=0000000000000000
[   47.627959] , *pud=0000000000000000

[   47.627976] Internal error: Oops: 96000004 [#1] SMP
[   47.627988] Modules linked in: fpga_frontend(O) uio_pdrv_genirq
[   47.628015] CPU: 0 PID: 1847 Comm: fpga_test Tainted: G           O    4.9.0-xilinx-v2017.3 #1
[   47.628031] Hardware name: XENVM-4.8 (DT)
[   47.628041] task: ffffffc00ea62e80 task.stack: ffffffc00df2c000
[   47.628061] PC is at try_module_get+0x4/0xb0
[   47.628075] LR is at cdev_get+0x20/0x60
[   47.628085] pc : [<ffffff8008101d8c>] lr : [<ffffff800818fd40>] pstate: 00000145
[   47.628101] sp : ffffffc00df2fb10
[   47.628111] x29: ffffffc00df2fb10 x28: ffffff800818f848
[   47.628127] x27: 0000000000000000 x26: ffffffc00db39000
[   47.628143] x25: ffffffc00df2fbec x24: 00000000000000f4
[   47.628161] x23: ffffffc00e812800 x22: 000000000f400000
[   47.628178] x21: 0000000000000000 x20: ffffffc00db39000
[   47.628196] x19: 0004000200000001 x18: 0000000000040900
[   47.628212] x17: 0000000000411230 x16: ffffff8008189e60
[   47.628229] x15: 000000000000066b x14: 981060a88830e000
[   47.628245] x13: 0000000000000000 x12: a8c85030f88010e8
[   47.628261] x11: 0000000000000020 x10: d0e0b4bfbd8fa4be
[   47.628278] x9 : b6f390d5b3398e5b x8 : ffffffc00e1bd520
[   47.628295] x7 : 000000000000001f x6 : 0000000000000000
[   47.628312] x5 : ffffffc00e1bd500 x4 : 0000000000001000
[   47.628328] x3 : 0000000000000000 x2 : ffffff800818fd80
[   47.628345] x1 : ffffffc00db39000 x0 : 0004000200000001

[   47.628370] Process fpga_test (pid: 1847, stack limit = 0xffffffc00df2c020)
[   47.628384] Stack: (0xffffffc00df2fb10 to 0xffffffc00df30000)
[   47.628398] fb00:                                   ffffffc00df2fb30 ffffff800818fd90
[   47.628415] fb20: ffffffc00dbe8180 000000000f400000 ffffffc00df2fb40 ffffff800850558c
[   47.628432] fb40: ffffffc00df2fba0 ffffff80081901e4 ffffff8008d65c98 ffffff8008d65000
[   47.628450] fb60: 0000000000000000 ffffffc00db0d558 ffffffc00da7e400 0000000000000000
[   47.628467] fb80: 0000000000000000 0000000000000000 ffffffc00df2fd78 0000000000000000
[   47.628485] fba0: ffffffc00df2fbf0 ffffff80081888f4 ffffffc00da7e400 ffffffc00da7e410
[   47.628503] fbc0: ffffff80081900e8 ffffffc00db0d558 ffffffc00e402600 0000000000000000
[   47.628519] fbe0: ffffffc00da7e400 0000000008197f5c ffffffc00df2fc30 ffffff800818995c
[   47.628535] fc00: ffffffc00da7e400 ffffffc00df2fd78 ffffffc00da7e400 0000000000000006
[   47.628552] fc20: ffffffc00e402600 ffffff800819794c ffffffc00df2fc50 ffffff8008199a18
[   47.628568] fc40: 0000000000020002 0000000000000000 ffffffc00df2fd40 ffffff800819b894
[   47.628584] fc60: 0000000000000003 ffffffc00df2fd78 ffffffc00df2fe98 0000000000000001
[   47.628600] fc80: 0000000080000000 0000000000000015 0000000000000123 0000000000000038
[   47.628616] fca0: ffffff8008962000 ffffffc00df2c000 ffffffc00df2fdf0 0000000000000800
[   47.628632] fcc0: 0000000000000800 0000000000000000 ffffffc00df2fd00 ffffff8008132854
[   47.628648] fce0: ffffffc00df2fe18 ffffffc00df2fd30 ffffffbf00000041 ffffffc00e5e7308
[   47.628664] fd00: ffffffc00df2fdc0 0000000200000000 ffffffc00db0d558 ffffffc00e1bd520
[   47.628680] fd20: ffffffc00e5d9000 0000000013e4a240 ffffffc00df2feb8 0000000000000015
[   47.628697] fd40: ffffffc00df2fe50 ffffff8008189d60 0000000000000003 00000000ffffff9c
[   47.628713] fd60: ffffffc00dccb000 0000007fac2afd58 ffffffc00d438900 ffffffc00e1bd520
[   47.628729] fd80: ffffffc00e5d9000 0000000f83340334 ffffffc00dccb021 0000000000000000
[   47.628745] fda0: ffffffc00e42a240 ffffffc00db0d558 0000000200000101 000000000000004c
[   47.628761] fdc0: 0000000000000000 ffffffc00df2fdd0 ffffffc00df2fe00 ffffff800819a9cc
[   47.628778] fde0: ffffffc00df2fdf0 ffffff80081a93a8 ffffffc00df2fe40 ffffff80081a9508
[   47.628794] fe00: 0000000000020002 00000000ffffff9c ffffffc00dccb000 0000007fac2afd58
[   47.628810] fe20: 0000000080000000 0000000000000015 ffffffc00dccb000 0000000000000000
[   47.628826] fe40: ffffffc00df2fe50 ffffff9c00000002 ffffffc00df2feb0 ffffff8008189e70
[   47.628842] fe60: 0000000000000000 0000007fdaa62e83 ffffffffffffffff 0000007fac2afd58
[   47.628858] fe80: 0000000080000000 0000000000000015 0000000000000000 ffff000000020002
[   47.827776] fea0: 0000010000000006 0000000000000001 0000000000000000 ffffff8008082ef0
[   47.827795] fec0: ffffffffffffff9c 0000007fdaa62e83 0000000000000002 0000000000000000
[   47.827812] fee0: 0000007fac334000 0000000000000001 0000000000411b40 0808f870e868d848
[   47.827830] ff00: 0000000000000038 f0602050180840f8 0101010101010101 0000000000000020
[   47.827848] ff20: a8c85030f88010e8 0000000000000000 981060a88830e000 000000000000066b
[   47.827865] ff40: 0000007fac2afd08 0000000000411230 0000000000040900 0000000000400cd0
[   47.827882] ff60: 0000007fdaa62e83 00000000004008c0 0000000000000000 0000000000000000
[   47.827899] ff80: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[   47.827917] ffa0: 0000000000000000 0000007fdaa62370 0000000000400bfc 0000007fdaa62370
[   47.827936] ffc0: 0000007fac2afd58 0000000080000000 ffffffffffffff9c 0000000000000038
[   47.827954] ffe0: 0000000000000000 0000000000000000 0000001700000017 0017d01f22022000
[   47.827969] Call trace:
[   47.827979] Exception stack(0xffffffc00df2f940 to 0xffffffc00df2fa70)
[   47.827995] f940: 0004000200000001 0000008000000000 ffffffc00df2fb10 ffffff8008101d8c
[   47.828014] f960: ffffff8008b26550 00000000fffffffb ffffffc00da757e8 ffffffc00da70380
[   47.828032] f980: ffffffc00da740d0 ffffffc00da74008 0000000e0df2fc70 ffffffc00e2516d0
[   47.828049] f9a0: 000000000000000e ffffffc00e251000 ffffffc00da7e400 0000000000000001
[   47.828067] f9c0: 0000000000000000 ffffffc00e340000 ffffffc00e2516d0 0000000e0000036a
[   47.828086] f9e0: 0004000200000001 ffffffc00db39000 ffffff800818fd80 0000000000000000
[   47.828103] fa00: 0000000000001000 ffffffc00e1bd500 0000000000000000 000000000000001f
[   47.828121] fa20: ffffffc00e1bd520 b6f390d5b3398e5b d0e0b4bfbd8fa4be 0000000000000020
[   47.828138] fa40: a8c85030f88010e8 0000000000000000 981060a88830e000 000000000000066b
[   47.828154] fa60: ffffff8008189e60 0000000000411230
[   47.828170] [<ffffff8008101d8c>] try_module_get+0x4/0xb0
[   47.828185] [<ffffff800818fd90>] exact_lock+0x10/0x20
[   47.828201] [<ffffff800850558c>] kobj_lookup+0xcc/0x158
[   47.828215] [<ffffff80081901e4>] chrdev_open+0xfc/0x168
[   47.828229] [<ffffff80081888f4>] do_dentry_open.isra.1+0x1dc/0x318
[   47.828245] [<ffffff800818995c>] vfs_open+0x44/0x70
[   47.828260] [<ffffff8008199a18>] path_openat+0x238/0x1000
[   47.828274] [<ffffff800819b894>] do_filp_open+0x64/0xe0
[   47.828287] [<ffffff8008189d60>] do_sys_open+0x128/0x200
[   47.828300] [<ffffff8008189e70>] SyS_openat+0x10/0x18
[   47.828315] [<ffffff8008082ef0>] el0_svc_naked+0x24/0x28
[   47.828329] Code: 88027c01 35ffffa2 d65f03c0 b4000540 (b9400001)
[   47.828401] ---[ end trace 8a28b89e17c62f92 ]---

Even if I return 0 immediately from the driver's open handler, I still get the same error. It seems the fault does not come from my driver, but I don't have a clue where it does come from.
I cannot verify whether it is a problem with the board, as I only have one ZCU102. Perhaps you have already encountered a similar problem?
Any pointers on where I should start debugging would be appreciated. Thank you.
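For completeness, the userspace side that triggers the oops is nothing more than an open() on the device node. A minimal sketch of that check (try_open is just an illustrative helper, not my actual fpga_test source):

```python
import errno
import os

def try_open(path):
    """Open a device node and report the result. The oops above happens
    inside chrdev_open() -> cdev_get() -> try_module_get(), i.e. before
    the driver's own open handler ever runs, which would explain why
    returning 0 from the handler makes no difference."""
    try:
        fd = os.open(path, os.O_RDWR)
    except OSError as e:
        return "open(%s) failed: %s" % (path, errno.errorcode.get(e.errno, str(e.errno)))
    os.close(fd)
    return "open(%s) succeeded" % path

if __name__ == "__main__":
    # /dev/mydevice is the character device from my driver.
    print(try_open("/dev/mydevice"))
```

On a healthy system this either succeeds or fails with a clean errno; an oops at this point would suggest the cdev registered for that major/minor is corrupt or already freed.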



  XZD 2017.1 Release
Posted by: Nathan.Studer - 02-12-2018, 03:12 PM - Forum: Knowledge Base - No Replies

DornerWorks is pleased to (belatedly) announce the 2017.1-based version of XZD/Virtuosity!

XZD Package:  http://dornerworks.com/wp-content/upload...170601.tgz
User's Manual:  http://dornerworks.com/wp-content/upload...anual2.pdf

New Features
Support for Petalinux v2017.1
Yocto-based rootfs



  XZD is Now Virtuosity
Posted by: Nathan.Studer - 02-12-2018, 03:09 PM - Forum: Knowledge Base - No Replies

With the coming addition of an iMX8 Xen Distribution, and possibly other SoC families in the future, the name Xen Zynq Distribution is too specific for the variety of platforms we intend to support.  For consistency among these offerings, XZD is being rebranded as Virtuosity.  (Not to be confused with the cheesy Denzel Washington film of the same name.)

Future versions of the distribution for the ZynqMP platform will be posted to the Virtuosity on ZynqMP page.



  Embedded Virtualization E-mail Series
Posted by: Nathan.Studer - 02-12-2018, 02:34 PM - Forum: Knowledge Base - No Replies

DornerWorks has created a three-part embedded virtualization e-mail series discussing the benefits of embedded virtualization and how it compares with and complements hardware partitioning.

You can register for the e-mail series here.

Feel free to respond with any feedback on the series or with suggestions for future topics.



Posted by: harias - 02-09-2018, 11:17 AM - Forum: Knowledge Base - Replies (3)

As we know, ENEA supports the Zynq UltraScale+.

So, I am wondering: will the ENEA RTOS be supported by XZD?
Does XZD offer the tools and platform to port the ENEA RTOS from scratch?


  How can we compile libvchan
Posted by: ariefgrand - 01-08-2018, 09:51 AM - Forum: Public Support - Replies (9)

Hello guys,

I have a problem cross-compiling the libvchan tools to run on Dom0 (or a DomU) on the ZCU102.
First, I cloned the buildroot from https://github.com/dornerworks/buildroot.git, but I got an error while building host-python3 3.4.2.
Then, thinking that repo might not be up to date, I looked for another one and cloned git://git.busybox.net/buildroot.
That repo works, and I was able to run make to completion. I then followed the steps in Section 5.3 of the guide, with a small modification since I wasn't using the same buildroot:

Quote:$ ./configure --host=aarch64-buildroot-linux-uclibc --enable-tools
$ make dist-tools CROSS_COMPILE=aarch64-buildroot-linux-uclibc- XEN_TARGET_ARCH=arm64 CONFIG_EARLY_PRINTK=zynqmp

However, when I try to execute ./vchan-node1 server read 0 data/vchan, I only get: -sh: ./vchan-node1: No such file or directory
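Incidentally, a shell error of "No such file or directory" for a binary that clearly exists usually means the ELF interpreter (the dynamic loader path embedded in the binary, here presumably a uclibc loader from the buildroot toolchain) is missing from the target rootfs. A sketch of one way to check which loader a binary requests (elf_interp is just an illustrative helper):

```python
import struct

def elf_interp(path):
    """Return the PT_INTERP (dynamic loader) path an ELF binary requests,
    or None if it is statically linked. If that loader path does not exist
    on the target rootfs, the shell reports "No such file or directory"
    even though the binary itself is present."""
    with open(path, "rb") as f:
        ident = f.read(16)
        if ident[:4] != b"\x7fELF":
            raise ValueError("not an ELF file")
        is64 = ident[4] == 2  # EI_CLASS: 2 = ELFCLASS64
        # ELF header fields after e_ident, for 64-bit vs 32-bit layouts.
        fmt = "<HHIQQQIHHHHHH" if is64 else "<HHIIIIIHHHHHH"
        hdr = struct.unpack(fmt, f.read(struct.calcsize(fmt)))
        e_phoff, e_phentsize, e_phnum = hdr[4], hdr[8], hdr[9]
        for i in range(e_phnum):
            f.seek(e_phoff + i * e_phentsize)
            if is64:
                p_type, _, p_offset, _, _, p_filesz = struct.unpack("<IIQQQQ", f.read(40))
            else:
                p_type, p_offset, _, _, p_filesz = struct.unpack("<IIIII", f.read(20))
            if p_type == 3:  # PT_INTERP
                f.seek(p_offset)
                return f.read(p_filesz).rstrip(b"\x00").decode()
    return None
```

If the loader it prints (for a uclibc toolchain, something like /lib/ld-uClibc.so.0) is not present on the board, either copy that loader and its libc into the rootfs, or link the tools statically.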

I followed the Petalinux flow to generate Xen, dom0, and the domU. I don't have gcc on the board, which is why I tried to cross-compile libvchan.
Has anybody tried this before? If my method is wrong, could you tell me how to do the cross-compilation and which tools I should use?
Or at least, could you tell me which toolchain supports libxen, so that I can compile libvchan?

Thank you so much.


  DMA from DomU on ZynqMP
Posted by: ariefgrand - 12-08-2017, 06:46 PM - Forum: Public Support - Replies (2)

Hello guys,

Has any of you already performed DMA from a DomU on the Zynq UltraScale+?
For instance, doing writes/reads using /dev/mem or /dev/uio?
Thank you.

Best regards,


  DomU and PL on ZCU102 Zynq Ultrascale+
Posted by: ariefgrand - 12-07-2017, 03:27 PM - Forum: Public Support - Replies (1)

Hello guys,

Is there any tutorial or example of how a DomU can access an IP in the PL of the Zynq UltraScale+?
The IP uses an AXI slave interface.

Thank you so much.

Best regards,


  Rebuilding Xen Tools
Posted by: brettstahlman - 10-26-2017, 04:25 PM - Forum: Getting Started - Replies (2)

Incidentally, what's the recommended way to get rebuilt Xen tools libraries into the dom0 rootfs? I added some debug code to the libxenforeignmemory source, but it had no effect, apparently because the dom0 rootfs is populated with pre-built binaries. I'm sure I could use losetup/kpartx etc. to copy them in manually, but that would be tedious, especially if I have to do it often. I'm hoping there's a more automated way...

One other build issue I've run into, now that I'm building Xen proper with Petalinux and the Xen tools manually (make dist-tools): the CONFIG_EARLY_PRINTK=ronaldo setting recommended in the XZD UM seems to cause multiple-definition errors (e.g., on early_puts) when I attempt to rebuild Xen proper after having built the tools manually. I tried setting CONFIG_EARLY_PRINTK=ronaldo when building Xen proper as well, but that didn't fix it. I ended up having to do a full clean to get Xen building again...

Brett S.


  SIGBUS: "ttbr address size fault", possibly caused by .dtb issue
Posted by: brettstahlman - 10-19-2017, 05:04 PM - Forum: Getting Started - Replies (8)

I've written a small app, designed to run in dom0, which uses the Xen "foreignmemory" interface to read arbitrary pages within a user domain. The call to perform the memory mapping succeeds, and a pointer to the mapped buffer is returned by xenforeignmemory_map(). But I get a SIGBUS as soon as I attempt to access the data in the buffer:

(XEN) traps.c:2508:d0v1 <register values...>
[    62.1234413] Unhandled fault: ttbr address size fault (0x92000000) at 0x0000007fxxxxxxxx
Bus error

Note that the timestamped "Unhandled fault" message is actually from the linux kernel (mm/fault.c). I suspect the "address size fault" occurs because something is telling Xen that 0x7fxxxxxxxx is an invalid (too large) address. Looking in $(RELEASE_DIR)/dts/xen-zcu102.dts, I see the following:
    memory {
        device_type = "memory";
        reg = <0x0 0x0 0x0 0x80000000 0x8 0x0 0x0 0x80000000>;
    };

...which, IIUC, defines two 2 GB memory regions: one at address 0, the other at address 0x800000000. Adding 2 GB to the start of the upper range gives an end address of 0x880000000, which is significantly below the address of the buffer I'm attempting to access.

Accordingly, I tried modifying the memory node to increase the sizes from 0x80000000 to 0x8000000000, which should have placed the offending address within both the upper and lower ranges, but the error persisted. Is there another DTB file I would need to modify? The one I modified is the one that gets copied to the SD card's boot partition, which I'm assuming is used by Xen itself, not Dom0. I just looked at zynqmp-zcu102.dts (under $RELEASE_DIR/components/linux-kernel/xlnx-4.6/arch/arm64/boot/dts) and saw that its memory node looks identical to the one above, so perhaps that's the problem.

But if so, it seems odd that mmap() (used by xenforeignmemory_map() to allocate the buffer) would return an address outside the default ranges defined in the kernel's device tree. Isn't the kernel supposed to use the information in the device tree to configure its memory management? As a test, I allocated a buffer with malloc(), and it was placed at 0x3xxxxxxx, well within the configured limits. (Unfortunately, xenforeignmemory_map() doesn't let you pass a pointer to the desired buffer the way mmap() does...)
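For reference, the reg arithmetic can be checked mechanically: with #address-cells = 2 and #size-cells = 2, each region is two 32-bit address cells followed by two 32-bit size cells (decode_reg is just an illustrative helper):

```python
def decode_reg(cells, address_cells=2, size_cells=2):
    """Decode a flattened device-tree 'reg' property (a list of 32-bit
    cells) into (base, size) pairs, assuming #address-cells and
    #size-cells of 2 as on 64-bit platforms like the ZynqMP."""
    step = address_cells + size_cells
    regions = []
    for i in range(0, len(cells), step):
        base = size = 0
        for c in cells[i:i + address_cells]:
            base = (base << 32) | c        # fold address cells, MSB first
        for c in cells[i + address_cells:i + step]:
            size = (size << 32) | c        # fold size cells, MSB first
        regions.append((base, size))
    return regions

# The memory node quoted above:
for base, size in decode_reg([0x0, 0x0, 0x0, 0x80000000,
                              0x8, 0x0, 0x0, 0x80000000]):
    print("base=%#x size=%#x end=%#x" % (base, size, base + size))
# base=0x0 size=0x80000000 end=0x80000000
# base=0x800000000 size=0x80000000 end=0x880000000
```

...which matches the reading above: the upper region ends at 0x880000000.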
Is the device tree the likely cause of the SIGBUS error? If so, is changing the memory node's "reg" property the right fix, or does the fact that mmap() returns an address above 4G point to a problem elsewhere?
Brett S.
