In my introductory post I referred to complexity theory, which can broadly be described as the study of interactions. Complexity theorists recognize there is great value in simulating the interactions of complex systems. This is largely because any large number of interacting elements presents difficult problems in determining the interactive principles that drive collective behavior. In these kinds of systems there are often many factors that affect collective dynamics, and it is very difficult to control and isolate these factors for empirical experimentation [1].

To the rescue, however, come computer simulations. A computer simulation is a powerful tool: by tweaking its parameters, it becomes possible to understand the interactive principles underlying collective dynamics. I have learned this first-hand in developing computer models of peloton dynamics.

Computer simulations fall in the domain of "computational models": computer algorithms (computer programs) carried out by the massive data-processing capacities of computers [2]. I think of them as a kind of hybrid between a purely mathematical model, in the form of math equations, and empirically observed behavior.

Computer simulations are a means of creating data that cannot easily be sourced in the real world. Sometimes simulation data, or the behavior of the simulation, does not reflect real-world processes very well. In that case you can add or omit algorithm parameters, or tweak existing ones, and by this process make predictions about real-world behavior. If your simulation reasonably matches the real-world behavior you are modelling, then you have learned something fundamental about the principles that drive it. In this way the process of computer simulation can be a rich source of discovery and satisfaction.

Although I have no special mathematical training, by creating peloton computer simulations I've derived a few mathematical equations that I likely could not have arrived at without the heuristic, trial-and-error process that computer programming allows. Incidentally, this has also helped me better understand how mathematical equations are derived in the first place: by gathering data and finding patterns and relationships in that data.

That said, the equations I've developed may not be the best ones for the peloton behavior I'm trying to model. Improvements on a number of levels may still be made. However, one wonderful thing about computer simulations is that once your simulation shows behavior that reasonably reflects real-world behavior, the simulation itself validates your algorithm. Even if you have trouble articulating what you have developed, a working simulation transforms your intuitions and speculation into the realm of solid evidence, reproducible results, and testable predictions.

So, on that note, here are some things I’ve worked up.


**The relationship between speed, power output, and drafting benefit**

I've started by looking at the relationship between cyclists' power output, their speed, and the power output reductions (energy savings) caused by drafting. Without getting into too much detail here, this leads to a basic relationship between these elements, one that I refer to as the "peloton-convergence-ratio" (PCR). I have a couple of equations for this, but this one is the better of the two [3]:

PCR = *P*<sub>qfront</sub> / *P*<sub>maxdraft</sub>

Where PCR is the Peloton Convergence Ratio;

*P*<sub>qfront</sub> is the power output of the front rider at the given speed, and equals the power output required by the following, drafting rider to maintain the speed set by the front rider **if the following rider were not drafting** (hence "required output");

*P*<sub>maxdraft</sub> is the maximum sustainable output of the following rider.
The equation expresses what all cyclists know: a drafting rider can match the speed of a front rider even at speeds that would exceed the drafting rider's capacity if the front rider were not there to create the drafting zone. When PCR exceeds the value 1, a front rider and a following rider separate, or "de-couple". One nice thing about this equation is that it incorporates speed, which determines drafting magnitude, while also differentiating between speed and power output. Because PCR is defined in terms of power output rather than speed alone, it also encompasses hill-climbing situations in which riders de-couple at lower speeds. This is shown in the graph below: the horizontal line at PCR 1.0 marks the de-coupling point between cyclists as their speeds increase, shown for different incline gradients.

**Figure 1.**
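To give a feel for the relationship plotted above, here is a minimal Python sketch. It assumes a standard road-cycling power model (aerodynamic drag + rolling resistance + gravity); the rider mass, drag area, rolling coefficient, and 300 W sustainable output are illustrative assumptions of mine, not values from my model:

```python
def required_power(speed_ms, gradient, mass_kg=75.0, cda=0.32,
                   crr=0.005, rho=1.226, g=9.81):
    """Approximate power (watts) needed to hold a speed on a gradient.
    Standard road-cycling model: aero drag + rolling resistance + gravity.
    All parameter values are illustrative assumptions."""
    aero = 0.5 * rho * cda * speed_ms ** 3
    rolling = crr * mass_kg * g * speed_ms
    climbing = mass_kg * g * gradient * speed_ms  # small-angle approximation
    return aero + rolling + climbing

def pcr(speed_ms, gradient, max_sustainable_w=300.0):
    """Peloton Convergence Ratio: the follower's required (non-drafting)
    output divided by the follower's maximum sustainable output."""
    return required_power(speed_ms, gradient) / max_sustainable_w

# De-coupling occurs where PCR crosses 1.0; on steeper gradients this
# happens at lower speeds, as in Figure 1.
for grade in (0.0, 0.04, 0.08):
    speed = 1.0
    while pcr(speed, grade) < 1.0:
        speed += 0.1
    print(f"gradient {grade:.0%}: de-coupling near {speed * 3.6:.0f} km/h")
```

The exact speeds depend entirely on the assumed parameters; the point is only the shape of the relationship: the de-coupling speed falls as the gradient rises.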

PCR, as a ratio, is fundamental to my computer simulation. My approach has been to create an algorithm that includes PCR as a manually adjustable parameter; i.e. a ratio between 0 and 1 (or a bit over 1). Adjusting PCR up means that coupled cyclists' capacity to pass each other falls; conversely, at lower PCR, cyclists can pass each other freely. This is a simple point to make, but it is very important in modeling peloton behavior. With this basic information, we have the foundation for an algorithm that generates realistic peloton behavior.
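As a sketch of how such an adjustable PCR "knob" might gate passing — where the linear scaling and the 0.9 base probability are illustrative assumptions of mine, not the model's actual rule:

```python
import random

def can_pass(pcr_value, base_pass_prob=0.9):
    """Hypothetical passing gate: the chance a rider initiates a pass in
    a given time-step falls linearly toward zero as PCR rises to 1.
    Purely illustrative scaling, not the model's actual rule."""
    return random.random() < base_pass_prob * max(0.0, 1.0 - pcr_value)

random.seed(0)
low_pcr_passes = sum(can_pass(0.2) for _ in range(1000))
high_pcr_passes = sum(can_pass(0.9) for _ in range(1000))
print(low_pcr_passes, high_pcr_passes)  # passing is far rarer at high PCR
```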

**The passing principle**

Here is where it gets dicey. If we start with the premise that passing gets more difficult as cyclists' collective power output increases, then we can simply use our adjustable PCR "knob" to turn the power-output value up or down: at low PCR, or low power output, cyclists pass more freely and easily; at higher PCR, or higher power output, passing takes longer and happens less frequently in the peloton as a whole. This suggests there is some constant, low-value, baseline measure of the distance, time, or number of cyclists that one cyclist can realistically pass in one go. Presently, I don't have a realistic representation of this constant.

The absence of such a constant is not the end of the world though, since my simulation is designed not necessarily to be highly accurate, but to demonstrate that certain principles, like the passing principle, are real; that these principles can be modeled and result in some realistic behaviors.

So we can begin with some rough, eyeballed base-level passing parameter, which we can then adjust by changing PCR as a fraction of 1, and develop a working equation for the passing dynamics. At this point, since a paper of mine containing more details is under review by an academic journal, it is prudent that I not state my equation outright. In addition, since submitting that paper I've modified the equation so that it now describes cyclists' passing times in a modified T = D/V (time = distance / velocity) form. This recent modification may be something the reviewers ask for in any event [4]. I will add, however, that my model also incorporates and modifies elements of Uri Wilensky's NetLogo flocking model [5], which I have found are necessary components of a realistic peloton model.
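Since the actual equation is under review I won't reproduce it, but a generic modified T = D/V form conveys the shape of the idea. The (1 − PCR) scaling of effective passing speed below is a placeholder of my own invention, not the submitted equation:

```python
def passing_time(pass_distance_m, speed_ms, pcr_value):
    """Illustrative modified T = D / V: the follower's surplus speed
    available for passing shrinks as PCR approaches 1, so passing time
    grows without bound near the de-coupling point. The (1 - PCR)
    scaling is a placeholder, not the author's actual equation."""
    if pcr_value >= 1.0:
        return float("inf")  # no surplus capacity: the pass never completes
    return pass_distance_m / (speed_ms * (1.0 - pcr_value))

for p in (0.2, 0.5, 0.8, 0.95):
    print(f"PCR {p}: {passing_time(10.0, 10.0, p):.2f} s to pass")
```

Whatever the true functional form, any equation with this qualitative behavior — passing time rising steeply as PCR approaches 1 — produces the passing principle described above.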

Nevertheless, I feel comfortable showing a few basic results.

For example, the 3D graph below (Figure 2) shows cyclists' passing times for my equation in modified T = D/V form, after plugging a passing constant and PCR values into the equation over a range of incline gradients (hills). Looking at the vertical y-values ("passing time (seconds)"), we see that the time required to pass increases as peloton speeds increase (z-axis), and, on climbs, at lower speeds (x-axis). It is more or less a 3D version of Figure 1, except that Figure 2 indicates passing times and contains values that are applied in my NetLogo peloton simulation.

**Figure 2.**

So, essentially my NetLogo model computes all the values shown in Figure 2, for any number of simulated cyclists in a peloton. Combined with the modified Wilensky flocking rules noted above, we see the emergence of apparently chaotic oscillations in peloton length (the distance from front rider to rear rider, shown by the red line), particularly at mid-range PCR levels, as shown in Figures 3 and 4.

**Figure 3.**

**Figure 4.**
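The oscillations above come from the full NetLogo model with the modified flocking rules. As a much-reduced, one-dimensional cartoon of the underlying idea — every number here (cohesion factor, noise range, rider count) is an illustrative assumption, and the result merely fluctuates rather than reproducing the flocking-driven oscillations:

```python
import random

def simulate_lengths(n_riders=20, steps=300, pcr_value=0.5, seed=1):
    """Toy 1-D peloton sketch (not the NetLogo model). Each step, every
    rider rolls forward with some noise; with probability (1 - PCR) a
    rider has surplus capacity and closes part of its gap to the front.
    Returns the peloton length (front-to-rear distance) at each step.
    At high PCR riders cannot close gaps, so the group stretches."""
    random.seed(seed)
    positions = [random.uniform(0.0, 20.0) for _ in range(n_riders)]
    lengths = []
    for _ in range(steps):
        front = max(positions)
        for i in range(n_riders):
            positions[i] += random.uniform(0.8, 1.2)       # baseline roll + noise
            if random.random() < 1.0 - pcr_value:          # surplus capacity
                positions[i] += 0.2 * (front - positions[i])  # close the gap
        lengths.append(max(positions) - min(positions))
    return lengths

lengths_easy = simulate_lengths(pcr_value=0.1)  # passing easy: compact group
lengths_hard = simulate_lengths(pcr_value=0.9)  # passing hard: stretched group
print(sum(lengths_easy[-100:]) / 100, sum(lengths_hard[-100:]) / 100)
```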

If we look at the simulated cyclists themselves, the peloton length oscillations (red line) shown in the graphs above, occur in sequences that are rather like the examples shown in the shots below.

*Above, the peloton is together in a single group.*

*Above, the peloton has split, and the two groups are heading generally in different directions.*

*Above, the lower group is more stretched than the one above.*

*Above, the peloton has split into three groups. In this sequence, the split at the left of the view was temporary, as the group quickly reintegrated.*

So, those are some basic elements of my peloton model. In its basic form it is not much different from a couple of my previous versions, but this one is perhaps the best in terms of incorporating an equation that may be translated into actual measurable values; i.e. the times required for cyclists to pass others as they increase their collective power output. In this way, the model and its algorithm may be checked against actual data acquired from pelotons. Despite the abundance of riders using power meters and other handy technologies, my own experience is that it is not easy to gather this data for a whole peloton, particularly on a small budget. Still, there are a few relatively inexpensive ways to gather data that can go a long way toward increasing our understanding of self-organized peloton dynamics. More on that in another post.

**Notes and references**

[1] See, for example, Philip Ball's description in "Why Society is a Complex Matter" (Springer-Verlag, Berlin, 2012) at p. xii: "Not only do outcomes often depend on a host of different contingencies, but sometimes there may be too much variability in the system – too sensitive a dependence on random factors – for outcomes to be repeatable."

[2] http://en.wikipedia.org/wiki/Computational_model

[3] Note: this equation is contained in a paper of mine under review for possible publication; however, before submitting it I had mentioned it in relation to a NetLogo model I had uploaded to the NetLogo Community Models website, so it was already in the public domain.

[4] Acknowledgement to Ashlin Richardson for encouraging me to modify my equations/model to be in this form, or similar form.

[5] Wilensky, U. (1998). NetLogo Flocking model. http://ccl.northwestern.edu/netlogo/models/Flocking. Center for Connected Learning and Computer-Based Modeling, Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL.

