Seagull Performance Tuning Parameters
With proper configuration you should be able to push up the rate of calls that Seagull can generate, and in this article I’m going to detail some of the parameters and other options you can play around with to get the maximum performance out of your Seagull installation.
I’ll state the obvious right up front: your hardware needs to be up to the job at hand.
I’m using a reasonably capable 8-core machine with an Intel Xeon E5-2699 v3 @ 2.30 GHz running CentOS Linux, but Seagull seems to run well and generate high call rates even on quite modest hardware.
So let’s start by tuning the core Seagull configuration. There are plenty of parameters to play around with in the configuration file which can be located here:
The Key Seagull Parameters to Check
The most important parameter I have found is:
The select-timeout-ms parameter defines the timeout used when Seagull listens to the system for incoming messages, so try lower values to achieve higher call rates.
The next parameter to check is:
Make sure that max-simultaneous-calls has a high enough value as this parameter defines the maximum number of simultaneous calls that Seagull can setup.
The maximum number of simultaneous calls is calculated using the following formula:
(Duration of a call * call rate) * 1.2
I wasn’t sure of the call duration, which presumably depends not only on the call scenario but also on system latency, so I simply set it to 50000, a value I figured would be more than adequate for my application.
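As a sanity check on the formula, here is the arithmetic for a hypothetical scenario (the 100 calls/s rate and 60 s duration below are assumed values for illustration, not ones from my setup):

```shell
# Hypothetical sizing example (assumed values, not from my installation):
# average call rate = 100 calls/s, average call duration = 60 s
rate=100
duration=60
# Apply the 1.2 safety factor using integer arithmetic (multiply by 12, divide by 10)
max_calls=$(( duration * rate * 12 / 10 ))
echo "$max_calls"   # prints 7200
```

With those assumptions, setting max-simultaneous-calls to anything above 7200 would leave comfortable headroom.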
Next, check the following parameter:
The call-timeout-ms parameter defines a timer after which Seagull closes the call if it gets “stuck” for any reason.
If this happens then the call will be closed and marked as failed.
I set call-timeout-ms fairly low, my reasoning being that if the call is stuck then I don’t want the system to tie up resources – let’s just close the call quickly and move on to the next one.
Here’s how all the above parameters look in my conf.client.xml file:
<define entity="traffic-param" name="max-simultaneous-calls" value="50000"></define>
<define entity="traffic-param" name="select-timeout-ms" value="5"></define>
<define entity="traffic-param" name="call-timeout-ms" value="5000"></define>
A final note, the Seagull documentation states that the max-send and max-receive parameters are no longer used, so I leave both at their default values.
TCP Stack Settings for Seagull
By default, the Linux TCP network stack is not configured for high-speed or large file transfers across network links, and the settings are usually on the conservative side to save memory.
We can tune these networking parameters by making changes to /etc/sysctl.conf, so let’s take a backup of that file first in case we need to restore the original values in future:
cp /etc/sysctl.conf /etc/sysctl.conf.BAK
Warning: Please note that changing the following settings to the values shown is going to increase memory usage on your server. If memory is an issue for your particular installation, then proceed with caution!
First, set the maximum OS send buffer size (wmem) and receive buffer size (rmem) to 12 MB for queues on all protocols:
echo 'net.core.wmem_max=12582912' >> /etc/sysctl.conf
echo 'net.core.rmem_max=12582912' >> /etc/sysctl.conf
You also need to set minimum size, initial size, and maximum size in bytes:
echo 'net.ipv4.tcp_rmem= 10240 87380 12582912' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_wmem= 10240 87380 12582912' >> /etc/sysctl.conf
Turn on window scaling which can be an option to enlarge the transfer window:
echo 'net.ipv4.tcp_window_scaling = 1' >> /etc/sysctl.conf
Enable timestamps as defined in RFC1323:
echo 'net.ipv4.tcp_timestamps = 1' >> /etc/sysctl.conf
Enable selective acknowledgments (SACK):
echo 'net.ipv4.tcp_sack = 1' >> /etc/sysctl.conf
By default, TCP saves various connection metrics in the route cache when a connection closes, so that connections established in the near future can use them to set initial conditions.
Usually this increases overall performance, but it can sometimes cause degradation. With tcp_no_metrics_save set, TCP will not cache metrics on closing connections:
echo 'net.ipv4.tcp_no_metrics_save = 1' >> /etc/sysctl.conf
Set the maximum number of packets queued on the INPUT side when the interface receives packets faster than the kernel can process them:
echo 'net.core.netdev_max_backlog = 5000' >> /etc/sysctl.conf
Now reload the changes:
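On most Linux systems the new settings can be applied without a reboot by reloading /etc/sysctl.conf (this requires root privileges):

```shell
# Re-read /etc/sysctl.conf and apply the kernel settings (run as root)
sysctl -p
```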
You can review the actual values of the settings by typing the following commands:
The default and maximum amount for the receive socket memory:
cat /proc/sys/net/core/rmem_default
cat /proc/sys/net/core/rmem_max
The default and maximum amount for the send socket memory:
cat /proc/sys/net/core/wmem_default
cat /proc/sys/net/core/wmem_max
The maximum amount of option memory buffers:
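This corresponds to the kernel setting net.core.optmem_max, which can be read in the same way:

```shell
# Maximum ancillary (option) buffer size allowed per socket, in bytes
cat /proc/sys/net/core/optmem_max
```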
Thanks to Vivek Gite for this information in an excellent article on Linux TCP Parameter Tuning.
There is also more information on TCP and NIC tuning to be found in this article on Linux TCP Tuning.
Seagull Traffic Models
Seagull generates traffic using different model types, so try playing with these to see if it has any effect on your call rate.
To be honest, I tried all 3 settings and decided to stick with the default Best-effort setting for my use scenario, but it’s another possible tuning option that may help you to achieve higher call rates for your particular application!
- Uniform: for each interval, seagull tries to reach the expected call rate, regardless of what happened during the last interval. With this value, the max-receive and max-send options are automatically set. It is not recommended for a low call rate. To reach a high call rate, it is necessary to increase the call-rate slowly (with the keyboard control or the remote control) to avoid a burst phenomenon.
- Best-effort: seagull tries to maintain the expected average call rate by adjusting the instantaneous call rate using the rates reached during the previous intervals.
- Poisson: the real call rate varies around the expected call rate according to the Poisson distribution.
For more information see the Seagull Core documentation on the Seagull available traffic models.
Reducing Seagull Stats, Display & Logging Overhead for Improved Performance
Hopefully, with Seagull configured and tuned for your system and with enhanced TCP settings, you will achieve the call rates you need.
But if you’re still not there and want to try and squeeze out a little more performance, then you can try reducing Seagull overhead further by modifying some of the logging levels and information displays.
You can give Seagull more time for IO instead of displaying statistics by increasing the display and log-stat periods. Here is a possible configuration:
<define entity="traffic-param" name="display-period" value="8"></define>
<define entity="traffic-param" name="log-stat-period" value="8"></define>
Reduce Log Levels
The logging feature of Seagull provides several logging levels that can be combined, so try running Seagull with only a minimum set of logs.
The log level is specified at the command line using the -llevel option, which can be defined in the run script located in the following directory:
For example -llevel EWT logs Errors, Warnings and Traffic events.
So to increase performance try removing the logging of traffic events by specifying -llevel EW.
Ultimately, you could turn off all logging using the -llevel N option in the run command.
Also, by default, all log entries are time-stamped which is costly in terms of CPU time for the test tool.
These time-stamps can be disabled by using the -notimelog command line option when launching the tool.
For more information see the Seagull Core documentation on Logs and traces.
Seagull Background Mode
Finally, you can run Seagull in background mode which will disable all real-time display in the console.
This can be done by using the -bg option on the command line which, again, can be defined in the run script.
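Putting the logging and background options together, a run-script invocation might look like the sketch below. The file paths are illustrative placeholders, not the ones from my installation, so substitute your own configuration, dictionary, and scenario files:

```shell
# Example Seagull client invocation (paths are placeholders):
# -llevel EW    log errors and warnings only
# -notimelog    disable log entry time-stamps
# -bg           run in background mode, no real-time display
./seagull -conf ../config/conf.client.xml \
          -dico ../config/base_cx.xml \
          -scen ../scenario/sar-saa.client.xml \
          -llevel EW -notimelog -bg
```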
For more details, see the Seagull Core documentation on Command Line arguments.
What does the Seagull “Flow control not implemented” message mean?
When you see this message, it’s because Seagull received an “EAGAIN” error from the TCP protocol stack when trying to send a message.
This EAGAIN error occurs when Seagull cannot send any more data (for example because the TCP buffer is full): the TCP stack asks the application (Seagull) to try again later. But this mechanism (which can be called “flow control”) is not implemented in Seagull.
Assuming you have already tuned your TCP parameters, a possible cause is that the remote application (to which Seagull talks) cannot read the incoming data quickly enough, which causes the TCP stack queues on which Seagull relies to become overloaded.
If anyone has successfully implemented Seagull flow control then I would love to hear from you – it seems the original developers planned to introduce this feature but never got around to it before development was stopped!
Well, that about wraps it up for this article! If you do have a go at tuning Seagull then I would love to know how you got on and what made the most difference for you and your unique setup.
I will keep this article updated with any new information that becomes available on how to get the maximum performance out of Seagull, so please share your experiences and knowledge.