Progressively Mutating Solenoids
Mike Malis
2003-06-16 12:26:38
I was wondering about the selection of the solenoids after each progressive computation.  Are they selected randomly each time, or are they based on the previous data runs?  I was also wondering whether a version of this program could be written in which each successive solenoid is inserted only after a certain number of muons have exited the previous one.  This, I would think, would produce higher yields of muons with higher energies, because each successive solenoid would be "optimized" for the beam trajectory and energy properties that lie just before it.  Each solenoid would be based on calculations done at a specific point where the program pauses and evaluates the beam before inserting the next "optimized" solenoid.  With this basic set of rules applied, the only random parameter would be the initial conditions, which would follow a "butterfly effect" as in chaos theory.  The initial conditions would be refined more and more accurately as they are correlated with the end results in a feedback loop that acts like an iterated equation, so the highest yield and energy would be the only direction the program could follow.  Is this possible to program?
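A minimal sketch of that stage-by-stage idea, purely for illustration; candidate_solenoids, simulate_stage and score are hypothetical placeholders, not part of Muon1:

    # Hypothetical greedy, stage-by-stage optimisation: each solenoid is chosen
    # using only the beam state at the point where it will be inserted.
    def greedy_design(initial_beam, n_stages, candidate_solenoids, simulate_stage, score):
        beam = initial_beam
        design = []
        for stage in range(n_stages):
            # Try every candidate for this one slot and keep the locally best one.
            best = max(candidate_solenoids(stage),
                       key=lambda sol: score(simulate_stage(beam, sol)))
            design.append(best)
            beam = simulate_stage(beam, best)   # beam state feeding the next stage
        return design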

Mike Malis
Stephen Brooks
2003-06-16 12:34:22
What you are saying does, by coincidence, work for the thing we are currently optimising (I think), because there are no later restrictions on the design.  In an earlier project we optimised these solenoids with a chicane of magnets on the end that performed a beam compression along the longitudinal axis.  What you are suggesting works so long as the optimum design for the first N components is also the optimum for the whole machine.  But when there are other devices on the end (as in the real world), the only way I can see to go is 'holistic optimisation', where entire designs are simulated as one piece and then evolved.

One thing that is similar to what you are saying, and which I will use later on when we get to problems that are too hard to score by beam transmission alone (i.e. in 99.99% of the designs, zero particles get all the way through), is a modification of the scoring system.  Designs in which too few particles reach the end to give a sensible score would instead be scored by a sort of "99th percentile transmission distance" down the accelerator, i.e. a length.  That way, the first stage of the evolution will be for the beam to feel its way through the initial parts of the accelerator.  But typically, once it has reached the other end, more changes will have to happen all the way along the machine to maximise the overall yield.
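A minimal sketch of such a fallback metric, assuming the longitudinal position at which each particle is lost has been recorded (names are illustrative, not Muon1's):

    import numpy as np

    def percentile_distance(z_lost, q=99):
        # z_lost: longitudinal distance (metres) each particle travelled before
        # it was lost.  The 99th percentile says how far the bulk of the beam
        # "felt its way" down the machine, even when nothing reaches the end.
        return np.percentile(np.asarray(z_lost), q)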

Today's weather in %region is Sunny/(null), max.  temperature #NAN°C
Mike Malis
2003-06-16 13:16:37
What type of beam profile is required, or desired, at the end of the accelerator?  If the design I mentioned above, minus the feedback, were implemented, then you could take a linear "sweep" of initial conditions with a maximal spacing, so as not to obscure tangential-type fluctuations, and map the results.  From this initial map, which might require a few thousand data runs with each point run, say, 10 times, you could find the properties which are desired.  Then, in a later version, those desired regions could be refined by inserting the initial conditions which created those desired results, in combination with the feedback.  There may be several "patterned bands" produced by the initial point sweep that identify where, when and how the beam changes in its final state due to microscopic changes in its initial conditions.  Also, if the final results from the initial sweep produce a pattern with a fractional dimension that can be interpreted, then areas not covered in the initial sweep may become target areas.  The desired initial conditions, as determined by the initial mapping and fractional-dimension analysis, could be sent out to the individual users of the program and calculated.  This would provide an UNDERSTANDING of what the beam is doing over time and why, which may lead to a breakthrough in the geometry behind particle accelerators, including the geometry of the magnetic and electric fields required to produce certain effects using multi-piece accelerators with each piece having different properties.  What did you mean by "99th percentile"?  That completely went over my head.
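A minimal sketch of that kind of coarse sweep, assuming a single scalar initial-condition parameter and a placeholder run_simulation function (neither is part of the real program):

    import numpy as np

    def sweep(run_simulation, lo, hi, n_points=1000, repeats=10):
        # Sample the initial-condition parameter on an evenly spaced grid,
        # run each point several times, and map the mean result.
        grid = np.linspace(lo, hi, n_points)
        means = [np.mean([run_simulation(p) for _ in range(repeats)]) for p in grid]
        return grid, np.array(means)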

Mike Malis
Mike Malis
2003-06-16 13:35:54
The above would follow the aforementioned 'holistic optimisation' once the feedback was implemented, but the initial sweep would be chaos at its best.  Wink

Mike Malis
Stephen Brooks
2003-06-16 14:31:55
The output beam distribution we need will typically have to fit into the 'acceptance' of the next part of the accelerator, i.e. the region of particle offsets and velocities which will be treated correctly by the next part of the accelerator.  In fact, there is a sense in which the best way to calculate the acceptance is just to carry the simulation right on through the next part of the accelerator to see what happens.  That is, in part, what I was going to do: simulate the later stages too, so that the optimisation of the first parts is done to produce a beam that works well with the later ones.  As a _final_ output, which will have to go into the muon acceleration system and then the storage ring, physicists go by a quantity called the "emittance" of the beam, which is a bit like a 'temperature' in some ways.  A high-emittance beam can only be compressed into a small volume if the particles then have a very large variation in velocities, and the velocity variation can only be made small if the beam's physical size is large.  The goal is to get as many of the particles as possible inside a certain emittance; or equivalently, if the physical apertures in the accelerator cause particles outside that emittance to be lost anyway, we just want the biggest possible transmission.  Which is why I'm currently scoring by transmission alone.
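For reference, in one transverse plane the standard RMS emittance can be estimated from particle samples as sqrt(<x^2><x'^2> - <x x'>^2); a minimal sketch (the acceptance discussed here is really 6-dimensional, this is only the 2D version for illustration):

    import numpy as np

    def rms_emittance(x, xp):
        # x: transverse positions, xp: transverse angles (dx/dz), one entry per particle.
        x = np.asarray(x) - np.mean(x)
        xp = np.asarray(xp) - np.mean(xp)
        return np.sqrt(np.mean(x**2) * np.mean(xp**2) - np.mean(x * xp)**2)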

The thing about 99th percentile distance is just that I thought, if there are a lot of accelerators that score 0% transmission, a good way to compare them is to see in which one the particles generally travel furthest.  So I'd look at the 99th percentile value of the longitudinal distances the particles travel before they hit something.  Then perhaps the score could be minus the number of metres from that point to the true end of the accelerator, until a sufficient number of the particles finish, at which point we switch to scoring by (nonnegative) transmission percentages.
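Putting the two rules together, the score might look something like this sketch (the 1% survivor threshold and machine_length are illustrative assumptions, and percentile_distance is the helper sketched above, not actual Muon1 code):

    def score(z_lost, n_transmitted, n_total, machine_length, min_fraction=0.01):
        # Enough particles finish: score by (nonnegative) transmission percentage.
        if n_transmitted / n_total >= min_fraction:
            return 100.0 * n_transmitted / n_total
        # Otherwise: minus the number of metres from the 99th-percentile loss
        # point to the true end of the machine (closer to zero is better).
        return percentile_distance(z_lost) - machine_length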

Today's weather in %region is Sunny/(null), max.  temperature #NAN°C
[DPC]Stephan202
2003-06-16 14:35:41
Stephen meant that in certain complex simulations none of the particles reach the end of the machine.  As configurations improve, the point where the last particle is lost moves further towards the end of the machine.  Eventually a configuration will be found in which some particles DO make it all the way through.
The 99.99% was just an example result for an almost-complete calculation (complete meaning that all solenoids have been used in the calculations).

[I'm no Englishman, so Stephen, correct me if I interpreted this wrong]

Edit: I was too slow with replying Razz

---
Dutch Power Cow.
MOOH!
Mike Malis
2003-06-16 15:32:31
They both sound exactly the same, and correct, to me.  That is a good idea.  If that works, then one of those may actually become the most optimal configuration over many metamorphoses (spelled correctly?), but I would only consider that for the ones where the furthest-travelled distance was either really close to the end or really close to the beginning (I don't have a clue why this would be right, but it just feels right for some reason).

Mike Malis
Mike Malis
2003-06-16 16:05:14
The emittance and acceptance concepts are very intriguing.  If an already-compressed beam (oxymoron?) in the form of a very small 2-dimensional disk at time = 100 ns were to have a given number of particles with variable energies (velocities) and variable direction vectors, and the simulation were run backwards in time, the emerging emittance patterns (optimal for the accumulator ring) would be useful to the development of the actual accelerator design, because they would set a goal state for somewhere in the middle of the accelerator.  Then the accelerator solenoids could be worked from both ends towards the center, and the calculation for a single optimization pattern would be cut in half, so the "goal state" could be reached sooner.  In this case the goal state would be exactly opposite for each end of the accelerator, depending on whether you are going forward in time from the beginning or backwards in time from the end.  This would require optimizing the last half of the accelerator first, to provide a perfect emittance pattern in the middle, and then optimizing the first half to try to get as close as possible to that middle emittance pattern.  There can be a seemingly infinite number of perfect emittance patterns generated by changing around the last half of the solenoids and running the simulation backwards in time.  Then all the data from any calculation of the first half can be compared by the program to find the best fit out of all of the emittance patterns generated in the second half.  If you don't want me to suggest anything anymore, Stephen, you can tell me to stop anytime.  I just find this project a very fascinating challenge, and I wish I could help with it more, but I don't know a thing about programming.
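One crude, illustrative way to rank (front-half, back-half) pairings would be to compare low-order moments of their midpoint samples; this glosses over the distribution-reconstruction problem discussed later in the thread, and the array layout is an assumption:

    import numpy as np

    def midpoint_mismatch(front_mid, back_mid):
        # front_mid, back_mid: arrays of shape (N, 6) holding (x, y, z, vx, vy, vz)
        # samples at the matching plane, from the forward and time-reversed runs.
        # Smaller mismatch = the pairing is a better candidate for a full simulation.
        mean_diff = np.mean(front_mid, axis=0) - np.mean(back_mid, axis=0)
        cov_diff = np.cov(front_mid.T) - np.cov(back_mid.T)
        return np.sum(mean_diff**2) + np.sum(cov_diff**2)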

Mike Malis
Mike Malis
2003-06-16 16:32:43
Note: the "perfect emmittance patterns" produced could only be used with the exact solenoid configuration that produced them.  The very closely matched emittance patterns produced at the beginning would also have to be correlated with the exact solenoid configuration that produced them.  When these two sets of solenoids are placed together then the simulation can be run from the beginning for all of the best paired sets and the final statistics can be reviewed.

Mike Malis
Mike Malis
2003-06-16 17:13:15
Note also: with a linear increase in processing time, there will be a factorial increase in design patterns (i.e. if you take 3 full calculations, that would be the same amount of calculation as 6 half calculations, but the number of configurations would be 6!, or 6 factorial = 720, combinations).  For x half calculations the number of combinations would be (2x)!.

Mike Malis
Mike Malis
2003-06-16 17:15:52
Mistype: For the same computing time of x full calculations you would have (2x)!  resulting combinations.

Mike Malis
Mike Malis
2003-06-16 17:33:31
I'm retarded; it would be x + (x-1) + (x-2) + (x-3)... etc.
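A quick check, under the assumption that each of the x front halves can be paired with each of the x back halves:

    x = 3                # e.g. three front halves and three back halves
    print(x * x)         # 9 candidate full machines from 6 half-calculations
    # compare with (2x)! = 6! = 720, which counts orderings of 6 items instead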

Mike Malis
Stephen Brooks
2003-06-16 17:40:03
I've thought about that sort of "raytracing backwards" concept a few times.  It's not a bad idea.  The trouble is that what you're tracing backwards is not a load of particles, but rather a distribution, which is somewhat harder to model computationally.  The particles being simulated forwards actually stand for a distribution too, as there are far more than 10^5 particles in the bunch (more like 10^13, I think).  The real difficulty is calculating the transmission when those forwards and backwards sampled distributions "collide": for some reason there's no nice statistical way to generate a smooth distribution function out of a set of point samples without making some strong assumptions about its form.

Another trouble point, which recurs throughout this area of study, is that the goal aperture (the acceptance, usually an ellipsoid, of the next part of the accelerator) is 6-dimensional in nature.  It is over the space of all possible (x,y,z,vx,vy,vz), if you see what I mean.  And 6D objects typically take one heck of a lot of point samples to approximate well.  Another technique is to divide the 6D space into cells with a certain approximation to the distribution in each, but you need 10^6 cells even to get a resolution of 10 on each side, which is not great.  The highest dimensionality that has successfully been simulated with cells in that way is 4D, and that needed a supercomputing cluster.  For 6D we'd probably need the entire Muon1 project just to work out ONE simulation Smile.

So currently the simple particle-tracking technique is the almost-unchallenged standard in these simulations, although some codes use approximations to take the particles through contiguous components all in one step.  Muon1 was designed specifically not to do it that way, because with approximations come assumptions, which produce inaccuracies...
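The cell-count growth is just (bins per axis)^dimension; a quick illustration of why gridding 6D phase space blows up so fast:

    # Cells needed to grid 6-D phase space at a given resolution per axis.
    for res in (10, 20, 50):
        print(res, res**6)   # 1,000,000 / 64,000,000 / 15,625,000,000 cells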

Today's weather in %region is Sunny/(null), max.  temperature #NAN°C
Mike Malis
2003-06-17 01:20:35
Ahhh... this is the same catch-22 that I have seen in many areas of quantum mechanics.  Currently it is the one thing holding back M-theory (a beautiful type of string theory).  They are using "perturbation theory" to mathematically describe the 10-dimensional geometry of Calabi-Yau space-time.  The "infinities" that have been removed also produce other, smaller inaccuracies in the observation.  It all boils down to the fact that the mathematics required for these multidimensional types of calculations has not been discovered yet, as they currently require far more calculation than the existing mathematics can handle.  I am glad to see that you are getting such good results with the method you are currently employing.

Mike Malis