[ProgSoc] Predict Sydney traffic?

sanguinev at progsoc.org
Thu Dec 16 10:08:06 EST 2010



On 15/12/2010 11:32 PM, John Elliot wrote:

> Cool. It's amazing how CPU-intensive it is. I completely rewrote the
> genetic algorithm platform to use performant data structures (arrays
> instead of linked lists, and sealed classes rather than interfaces), and
> it still takes ages. I've been running it for a few days and have only
> done 38 generations (of ~150 strategies per generation).

Do you have a version that can run on Linux with a typical C/C++/etc.
compiler?

> Good luck with the python port, I'll be interested to check it out.
> 
> For now there is one particular problem I have that has me stumped,
> which is really annoying because I'm sure it's a trivial problem really.
> What I'm trying to do is build a model where if there is data within say
> the last n minutes (where n might be 60 or 120, or whatever) then the
> most recent reading will factor significantly in the results, whereas if
> the most recent reading is too far away (i.e. more than n minutes away)
> then it won't factor so significantly as a result. So if you're trying
> to predict 15 minutes into the future then the present reading is highly
> relevant, whereas if you're trying to predict 24 hours into the future
> then the current reading likely isn't as relevant as, say, a weekly
> average. Now the simple way to do this would be to do something like,
> 
>   if ( n > 120 minutes ) {
>     last_reading_weight = 0;
>   }
>   else {
>     last_reading_weight = 10000;
>   }
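
Incidentally, written out as a compilable function, and assuming the value
being tested is the age of the most recent reading in minutes (which seems
to be what n stands for here), that threshold rule is just:

float threshold_weight(float age_mins)
{
  /* Full weight while the last reading is recent, nothing once it is
     more than 120 minutes old. */
  return (age_mins > 120) ? 0 : 10000;
}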
> 
> But what I'd like instead (or "as well" I should say, because it's no
> trouble to trial each model) is a function (or maybe several functions
> could be trialed) that takes n and turns it into a weight in a more
> continuous fashion, where maybe I'd get readings like,
> 
>   f( 0 ) = 10000
>   f( 15 ) = 9000
>   ...
>   f( 60 ) = 1000
>   f( 120 ) = 500
>   ...
>   f( 1440 ) = 0.001
> 
> It would be ideal if the function f also took a random floating point
> value that modified the distribution while keeping the lower and upper
> bounds relatively intact. Can anyone think of such a function?

Something like:

float f(float mins)
{
  /* Full weight for anything less than a minute old, then
     decay as 1/mins scaled by a random factor. */
  if (mins < 1)
    return 10000;
  else
    return (10000 / mins) * random();
}

Where random() is a float in the range 0 to 1.

Obviously some other adjustments need to be made to balance the
weightings to match your desired results or reduce the influence of the
random value. But the basic shape seems to be what you want.
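
If you want something smoother than a step or a bare 1/mins curve, another
option (just a sketch, with arbitrary constants you would want to tune) is
an exponential decay where the random value perturbs the half-life rather
than the weight itself, so the end points stay roughly where you want them:

#include <math.h>

/* Weight decays exponentially with the age of the reading (in minutes).
 * jitter is a random float in [0, 1] that stretches or squeezes the
 * half-life (here 30 to 90 minutes), so repeated trials explore different
 * curve shapes while f(0) stays at 10000 and f(1440) stays near zero. */
float decay_weight(float mins, float jitter)
{
  float max_weight = 10000.0f;
  float half_life  = 60.0f * (0.5f + jitter);
  return max_weight * expf(-mins * logf(2.0f) / half_life);
}

With the nominal 60-minute half-life that gives roughly f(0) = 10000,
f(60) = 5000 and f(1440) somewhere down around 0.001, and the genetic
algorithm could just as easily evolve the half-life (or the jitter range)
as another parameter.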

- SanguineV


