Archive for the ‘jmeter’ Category

modulating the throughput in JMeter for better longevity stress tests

Thursday, September 2nd, 2010

When running a longevity stress test with JMeter (a test which runs for many days) you may need to emulate a load which approximates the real traffic the site receives in production. And that is definitely not a steady, constant load over the full 24 hour cycle.

Most normal sites (not twitter or facebook) receive different amounts of traffic at different times of the day. Although it depends on the nature of the site, the traffic usually looks like a sine wave with a wavelength of 1 day. Even if the real traffic isn't as smooth as a sine wave, a sine-modulated throughput is much better than testing with a constant one. A constant throughput can skew the data you get from the test, since the application, db and o/s level caches and other parts of the stack (e.g. the GC) may tune themselves to that specific constant throughput.

So, first of all we need to set up some variables in the JMeter test.
JMeter variables setup
Setting oscillationsPerDay to 1 is what we want.
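The screenshot doesn't come through in text, but the idea is four User Defined Variables along these lines (the min/max values here are just placeholder examples, not from the original test plan; pick whatever fits your production traffic):

minHitsPerSec       1      (example value)
maxHitsPerSec       10     (example value)
oscillationsPerDay  1
hitsPerMinute       60     (initial value; immediately overwritten by the script below)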

Next we set up a Constant Throughput Timer which references the hitsPerMinute variable. Note that the initial value of this variable doesn't matter, since we'll be constantly changing it via a BeanShell script.
JMeter Constant Throughput Timer
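Again the screenshot is missing in text form; the relevant part is simply that the timer's target throughput field holds the variable reference instead of a number (a reconstruction of what the screenshot showed, based on the variable name used in the script):

Target throughput (in samples per minute): ${hitsPerMinute}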

Lastly we need a BeanShell PreProcessor with the following script:

// variables (read from the User Defined Variables set up above)
double minHitsPerSec = Double.parseDouble(vars.get("minHitsPerSec"));
double maxHitsPerSec = Double.parseDouble(vars.get("maxHitsPerSec"));
double oscillationsPerDay = Double.parseDouble(vars.get("oscillationsPerDay"));

// calculation
// oscillationFrequency is really the period of one oscillation, in milliseconds
// (one day divided by the number of oscillations per day)
double oscillationFrequency = 1000L * 60 * 60 * 24 / oscillationsPerDay;
double range = maxHitsPerSec - minHitsPerSec;
// the sine term is in [-1, 1]; scaling by range/2 and shifting by range/2 + minHitsPerSec
// makes hitsPerSecond oscillate between minHitsPerSec and maxHitsPerSec
double hitsPerSecond = Math.sin(System.currentTimeMillis() / oscillationFrequency * (Math.PI * 2)) * range / 2 + range / 2 + minHitsPerSec;

// set the variable that the Constant Throughput Timer references (it expects hits per minute)
vars.put("hitsPerMinute", String.valueOf(hitsPerSecond * 60));

// log
log.info("throughput: " + hitsPerSecond + " hits per second, or " + vars.get("hitsPerMinute") + " hits per minute");

So this will generate a load which oscillates between minHitsPerSec and maxHitsPerSec as many times per day as you need. Of course, you can make the load and request behaviour more realistic by adding a Random Timer.
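If you want a quick sanity check of the formula outside of JMeter, here is a small stand-alone Java sketch (the class name and the example min/max values are mine, not part of the test plan) that prints the computed rate at a few points across a simulated day:

public class ThroughputCurve {

    // same formula as in the BeanShell PreProcessor above
    static double hitsPerSecond(long timeMillis, double minHitsPerSec,
                                double maxHitsPerSec, double oscillationsPerDay) {
        double oscillationPeriodMs = 1000L * 60 * 60 * 24 / oscillationsPerDay;
        double range = maxHitsPerSec - minHitsPerSec;
        return Math.sin(timeMillis / oscillationPeriodMs * (Math.PI * 2)) * range / 2
                + range / 2 + minHitsPerSec;
    }

    public static void main(String[] args) {
        double min = 1, max = 10, oscillationsPerDay = 1; // example values only
        for (int hour = 0; hour <= 24; hour += 3) {
            double hps = hitsPerSecond(hour * 3600L * 1000L, min, max, oscillationsPerDay);
            System.out.printf("hour %2d: %5.2f hits/sec (%4.0f hits/min)%n", hour, hps, hps * 60);
        }
    }
}

With oscillationsPerDay = 1 the printed rate starts at the midpoint, peaks at the maximum a quarter of the way into the day and bottoms out at the minimum three quarters in, which is exactly the curve the Constant Throughput Timer will follow.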

Tomcat vs JBoss Web

Wednesday, March 14th, 2007

JBoss Web is a web server and servlet container at the same time. Its promise is that it can serve both static and dynamic content very fast, without needing an Apache HTTPD fronting it. If that's true, it's party time, and I personally live for the day when it will be easy to get Java 5 enabled hosting for ~5USD/month (as is the case today with LAMP stacks).

JBoss Web uses APR (the Apache Portable Runtime) and native extensions to make better use of the O/S resources. Note that APR is also available for Tomcat now.

I decided to give JBoss Web a try, locally, and stress test it against a regular Tomcat. Note that I did this purely for fun (and out of curiosity). I do not own a lab, I am definitely not a stress test expert, and I do not understand many things at the low level (I/O, threads etc.).

Test info

  1. JMeter was used, running on the same machine as the servers being tested.
  2. During the tests JMeter would use ~30% of the CPU, and the server under test would consume the remaining ~70%.
  3. O/S was Windows XP SP2 on an AMD64 3000+ with 1.5GB RAM.
  4. Java 1.5.0_06 in -server mode for both servers.
  5. Default installations of JBoss Web 1.0.1 GA and Tomcat 5.5.23 were used.
  6. -Xms and -Xmx settings were not altered. I don't think it mattered.
  7. I stress tested 10 URLs of a very small webapp with a front controller delegating to cached FreeMarker views. No logging, no persistence or database calls. JBoss' CONSOLE appender's threshold was changed to FATAL, to avoid any logging output that would slow things down. The most interesting operations in the webapp were the GZIP filter and multipart request handling using Commons FileUpload.
  8. The servers were warmed up first. I found out that even with a small number of concurrent threads hitting the server, if they all start immediately you'll most likely get some 500s at the beginning. The warm up was anything between 2500-5000 requests, until the server throughput stabilized.
  9. Once the server was warmed up, I would take my sample from the next 5000-10000 requests.
  10. The "threads" column in the results table is the number of concurrent threads hitting the server.
  11. An HTTP Cookie Manager was used in JMeter, so 10000 separate sessions were not being created.

Results

threads | Tomcat 5.5.23                                                  | JBoss Web 1.0.1
50      | 95 requests/sec                                                | 88 requests/sec
75      | 105 requests/sec                                               | 95 requests/sec
100     | 123 requests/sec                                               | 100 requests/sec
125     | 75 requests/sec                                                | 104 requests/sec
150     | 110 requests/sec (at this point I had to increase maxThreads)  | 110 requests/sec
200     | 62 requests/sec                                                | 97 requests/sec
300     | 115 requests/sec                                               | 108 requests/sec
400     | n/a (at this point JMeter would block; 25 seconds per page)    | 80 requests/sec
500     | n/a                                                            | 75 requests/sec
600     | n/a                                                            | 84 requests/sec
700     | n/a                                                            | 55 requests/sec (10 seconds per page)
800     | n/a                                                            | 48 requests/sec (13 seconds per page)
1000    | n/a                                                            | n/a (at this point JMeter would block)

Findings

Even though this test can be considered rudimentary, JBoss Web looks very good. The biggest problem with the whole procedure is that JMeter was running on the same machine as the servers. JMeter supports Remote Testing and Distributed Testing, which would have produced more accurate results.

In any case, it was fun.