Data transfer rate with different interfaces
Posted: Thu Sep 03, 2015 6:04 am by krmld
Re: Data transfer rate with different interfaces
Posted: Thu Sep 03, 2015 3:03 pm by aolofsson
Re: Data transfer rate with different interfaces
Posted: Fri Sep 04, 2015 5:00 am by krmld
Re: Data transfer rate with different interfaces
Posted: Fri Sep 04, 2015 1:04 pm by sebraa
Using the current images, I did a few measurements.
I set up a direct connection between an Intel GBit Ethernet adapter and the Parallella, and ran the following commands:
on the laptop: "dd if=/dev/zero bs=1M | pv | nc parallella 12345"
on the Parallella: "nc -lp 12345 | pv >/dev/null"
I got about 40 MiB/s, with the Parallella about 75% busy (one core for netcat, half a core for pv), using the newest image on a stock configuration.
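To put the 40 MiB/s figure in the same units as the iperf numbers quoted elsewhere in this thread, it can be converted to Mbit/s with a little shell arithmetic (1 MiB = 1048576 bytes, 8 bits per byte):

```shell
# Convert the measured throughput from MiB/s to Mbit/s
mibps=40
mbps=$(( mibps * 1048576 * 8 / 1000000 ))
echo "${mbps} Mbit/s"   # prints "335 Mbit/s"
```

So the dd/netcat pipeline reaches roughly 335 Mbit/s, well below the wire speed of Gigabit Ethernet but far above the numbers questioned in the first post.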
Re: Data transfer rate with different interfaces
Posted: Fri Sep 04, 2015 2:15 pm by aolofsson
Re: Data transfer rate with different interfaces
Posted: Fri Sep 04, 2015 6:21 pm by sebraa
Using iperf, I get about 520 MBit/s (laptop as server), or about 580 MBit/s (Parallella as server).
When testing both directions at once (-d parameter), I get about 35 / 187 MBit/s with the laptop as server (though these numbers vary widely), or about 380 / 120 MBit/s with the Parallella as server.
Again, this is on a stock configuration using the 3.14.12-parallella-xilinx-g40a90c3 kernel.
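For reference, measurements like these can be taken with plain iperf (version 2 syntax; the hostname "laptop" is a placeholder) — this is a sketch of likely invocations, not necessarily the exact ones used above:

```shell
# On the machine acting as server (here: the laptop)
iperf -s

# On the client (here: the Parallella), run a one-way throughput test
iperf -c laptop

# Or test both directions simultaneously with -d (dual test)
iperf -c laptop -d
```

Swapping which side runs the server reverses the measurement direction, which is why the thread reports two sets of numbers.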
Re: Data transfer rate with different interfaces
Posted: Sat Sep 05, 2015 3:55 pm by ajtravis
Re: Data transfer rate with different interfaces
Posted: Sun Sep 06, 2015 9:49 am by tnt
If you want to improve performance a bit:
- Use larger MTU on network
- Use more aggressive DDR timings (the default ones are _very_ conservative, way under what the components are spec'd for).
- Use a memcpy library with optimized NEON code
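The first suggestion (a larger MTU) can be sketched as follows; "eth0" is an assumed interface name, the command needs root, and both endpoints plus any switch in between must support jumbo frames:

```shell
# Raise the MTU to 9000 bytes (jumbo frames); requires root and NIC/driver support
ip link set dev eth0 mtu 9000

# Verify the change took effect
ip link show dev eth0 | grep -o 'mtu [0-9]*'
```

Larger frames reduce the per-packet processing overhead on the Zynq's ARM cores, which is the bottleneck in the netcat test above.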
Re: Data transfer rate with different interfaces
Posted: Mon Sep 07, 2015 8:29 am by sebraa
I only wanted to see whether the extremely low numbers reported in the first post are realistic. I don't care too much about getting higher numbers. My workloads are limited by other factors, not by the network interface, and the only Epiphany cluster available to us uses a 100 MBit/s switch (don't ask me, I didn't build it...).
The dd/netcat combination was chosen because it is not a synthetic benchmark but a somewhat more realistic workload (that is, both cores working together to produce and move data).