Radioactive Fallout Model (Radfo) Part I

by Buddy McCloskey     6/06/2008     Contact

4. Flash Coalescing Mini Model:

Due to the volume of this topic, we placed it at the end; reading it at the point where it was first referenced would have distracted from the topics that referenced it. No link was given there, only here. Notice that we keep the same header and background. If you need to go back to the main Radfo page, use the browser's back (<=) key.

       A. Extrapolate for Better Data?:

Originally, it was thought that all the dust was being vaporized, so the need was to fill in more particles and detail than shown by Table 3.6-1 from the Navy Radfo Model. The first column (Particle) is really u/10, so 4 would be 40u and 14 represents 140u. A plot of this data shows the Cap and Stem.

As you can see, there is a lot of detail missing between .1 (1u) and 1 (10u) and also between 1 (10u), 2 (20u), 3 (30u), 4 (40u), etc. The three 6.0's seem too rounded off. Our first idea was to determine whether this is the normal curve that everybody assumes occurs in nature. Once that was established, we could use the normal curve formula to smooth the data and fill in more detail. Notice the plot of some normal curves. Maybe the Navy plot shows just half a normal curve.

       B. Binomial Distribution:

A second thought was to generate a normal curve from the Binomial Distribution formula - also known as Pascal's Triangle for the coefficients (n terms). That is, (x + y)^n = x^n + nx^(n-1)y + n(n-1)x^(n-2)y^2/2! + n(n-1)(n-2)x^(n-3)y^3/3! ... This exceeded the computer's ability to handle large numbers (1.7e308) and was difficult to work with, so it was abandoned. Later, we scaled the data and tried to fit half of a normal curve to the Navy's data; after much time it never fit to our satisfaction. Either the normal curve's maximum area drifted to the left and the bottom to the right of the data, or vice versa.
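To see how quickly those coefficients outgrow double precision, here is a minimal C sketch (our own illustration, not the original code) that builds C(n,k) row by row and stops when it nears the 1.7e308 limit.

    /* Sketch only: shows how fast binomial coefficients C(n,k) outgrow a
       double (DBL_MAX ~ 1.7e308).  Not part of the original PDPL.c.        */
    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        double c;                              /* running coefficient C(n,k)      */
        int n, k;

        for (n = 1; n <= 2000; n++) {
            c = 1.0;                           /* C(n,0)                          */
            for (k = 1; k <= n / 2; k++) {
                c = c * (n - k + 1) / k;       /* C(n,k) = C(n,k-1)*(n-k+1)/k     */
                if (c > DBL_MAX / 2.0) {       /* about to pass the double limit  */
                    printf("n = %d, k = %d: C(n,k) near the double limit\n", n, k);
                    return 0;
                }
            }
        }
        return 0;
    }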

       C. Coalescing Theory:

Another attempt was to simulate the action that might be occurring in the nuclear cloud as it cools. In rain clouds, larger raindrops appear through the coalescing of smaller droplets. Most literature indicates that this is most evident where clouds or rain areas converge. That applies here as the air converges toward the stem, but as we saw earlier, those cooler particles are vaporized by the hot reaction plasma, and the flow diverges at the top of the cloud stem. So even though we have cooling, we also have divergence, say at the top of the cap cloud, as seen in Fig. 1.3-1 again. We will try to generate these larger particles starting from vaporized 1u dust that has passed through the hot plasma in the stem.

       D. Mini Model Math Components: PDPL.c - 11/26/08

            (1) Dust Volume to Particles:

The first thing we need to determine is how many particles we have to start with. We started with the crater dust, then added the city dust to it as explained in the studies above. Our smallest vaporized particle is a 1u diameter sphere, or .5236u^3. If we convert the dust volume into 1u spheres, we have 8.4312e25u^3/.5236u^3 = 1.61e26 (np1meg) particles. This is the first np1 in our loops. Our collection particle is np2, which for the first case equals np1 as well. After combos, ip2 becomes larger, as seen as r2 in conea.gif. However, for our first case, what is the volume surrounding a 1u particle? We choose a sphere again instead of a cube; this makes the math less complex and should make little difference compared with using 1u^3 cubes or more 1u diameter spheres. This cap volume per particle is 7.790e+30u^3/1.61e26 = 48377.7u^3, or converted to a sphere with radius Rv = 22.604u = .0226mm by solving for Rv in 4/3 pi Rv^3 = 48377.7u^3. Now we have numbers in a range that we can comprehend. This is seen as the outer yellow broken circle at the borders in conea.gif again.
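Since this is simple arithmetic, a minimal C sketch of it may help (ours, not taken from PDPL.c); the constants are the 1-megaton NYC figures quoted above.

    #include <stdio.h>
    #include <math.h>

    #define PI 3.14159265358979

    int main(void)
    {
        double dust   = 8.4312e25;                   /* total dust volume, u^3         */
        double capvol = 7.790e30;                    /* cloud cap volume, u^3          */
        double v1     = 4.0/3.0 * PI * pow(0.5, 3);  /* 1u diameter sphere, .5236 u^3  */

        double np1meg = dust / v1;                   /* ~1.61e26 starting 1u particles */
        double vper   = capvol / np1meg;             /* cap volume per particle, u^3   */
        double Rv     = pow(3.0 * vper / (4.0 * PI), 1.0/3.0);   /* ~22.6u radius      */

        printf("np1meg = %.4e  vper = %.1f u^3  Rv = %.3f u\n", np1meg, vper, Rv);
        return 0;
    }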

            (2) Collection Efficiency Term:

Using this 1u diameter sphere, we call it np1 and its radius r1. We now meet our first probability term, the collection efficiency. Converting y to srf and referring to the diagram conea.gif, we define this as cole = r2/[srf*(r1+r2)], where r1 and r2 are the two particle sizes considered and srf is the streamline factor of the larger particle (r2); the capture radius is r2/srf, which from the drawing we estimate at near 1/4 the radius of r2. Actually, we use cole^2 for the final form of the collection efficiency term. The formula applies when r1 approaches r2 (or r2 approaches r1), so that r1 and r2 are close enough to touch each other if properly aligned, as conea.gif indicates.
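Coded up, the term is just a couple of lines; the sketch below is our reading of the description, with srf left as a parameter.

    /* Collection efficiency as described above: cole = r2/[srf*(r1+r2)],
       squared for the final form.  srf is the streamline factor of the
       larger particle (capture radius r2/srf).                            */
    double cole2(double r1, double r2, double srf)
    {
        double cole = r2 / (srf * (r1 + r2));
        return cole * cole;
    }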

            (3) Probability of Near Alignment:

The smaller particle r1 can move in any direction by randomness. If it does decide to go in a constant direction - the red line toward the center of r2 - it sweeps out an arc, as seen by the purple segment on the circumference of r1 and r2 in conea.gif. This is really a cross-sectional cut of the two spheres. In 3-D, it would show the surface area swept out by the arc segment in all directions from its center, creating a circle on the surface of the larger sphere, mathematically called a cap area - not to be confused with our nuclear cloud cap, which is really a cylinder. It is probably best to think of this as an ice cream cone. The surface area of this cap is given by 2 pi rh. So this segment area divided by the entire sphere's area, 4 pi r^2, is the probability it would move in that direction. We call this the area segment/area sphere ratio, (asr1) and (asr2). For a particle of radius r, this ratio is 2 pi rh/4 pi r^2 = h/2r. So asr1 = h1/2r1 and asr2 = h2/2r2.

For r2, we need some calculations to find the segment area. The lines that make angle theta connect to the top and are tangent to the circle of radius r2/srf. We use this radius because this fraction of the larger particle is the distance that the center of the smaller particle must fall within for the collection efficiency expression above to apply. If the center of r1 falls outside this range, the air current streamline of particle r2 fails to capture particle r1. These are the ground rules. With x,y common to both circle r2 and the cone angle, we substitute the line equation into the equation of the circle, x^2 + y^2 = r2^2. The line is y = bx + a, with b = tan theta = y/x. Substituting, r2^2 = x^2 + (bx+a)^2 = x^2 + b^2x^2 + 2abx + a^2 = x^2(1+b^2) + 2abx + a^2. Collecting x coefficients and constants for the quadratic equation Ax^2 + Bx + C = 0, we get A = (1+b^2), B = 2ab, C = a^2 - r2^2. Solving x = [-B +- (B^2-4AC)^1/2]/2A and using the + root, we find x. Then the small h2, not visible in conea.gif, lies between r2 and x, or h2 = r2 - x. With theta known, the x1 from the center of r1 (r1 being the hypotenuse) is r1 cos(theta). Then h1 = r1 - x1 = r1 - r1 cos(theta) = r1(1 - cos(theta)).
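The algebra above reduces to a few lines of C; this sketch is our own coding of it (theta and a = r2/srf are assumed already known).

    #include <math.h>

    /* Cap heights h1 and h2 from the quadratic above.  The line y = b*x + a
       (b = tan theta, a = r2/srf) is intersected with the circle of radius r2;
       the + root gives x, then h2 = r2 - x and h1 = r1*(1 - cos theta).        */
    void cap_heights(double r1, double r2, double theta, double a,
                     double *h1, double *h2)
    {
        double b = tan(theta);
        double A = 1.0 + b * b;                     /* x^2 coefficient           */
        double B = 2.0 * a * b;                     /* x coefficient             */
        double C = a * a - r2 * r2;                 /* constant term             */
        double x = (-B + sqrt(B * B - 4.0 * A * C)) / (2.0 * A);

        *h2 = r2 - x;                               /* then asr2 = h2/(2*r2)     */
        *h1 = r1 * (1.0 - cos(theta));              /* and  asr1 = h1/(2*r1)     */
    }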

            (4) Mean Position of Small Particle:

Now the angle theta depends on the x distance Xm along the red line. With much labor, probably unnecessary, we thought it was more accurate to compute this mean angle as the particle travels its x limits rather than the angle at its mean x position. This developed into an integration formula as follows:
Integral ArcCot(x/a) dx = x ArcCot(x/a) + (a/2)*log(a^2 + x^2). There is no ArcCot in the 'C' language, so with Cot = 1/Tan we replace ArcCot(x/a) with ArcTan(a/x); for reference, the corresponding ArcTan form is:
Integral ArcTan(x/a) dx = x ArcTan(x/a) - (a/2)*log(a^2 + x^2).
Here x0 = Rv - r1 is the starting x measured along the red line from the origin (x = 0), so lim2 = x0; xf = r1 + r2 is the x at which the two particles touch, so lim1 = xf; and a = r2/srf.
So ang = lim2*atan(a/lim2) + (a/2)*log(a*a + lim2*lim2) - [lim1*atan(a/lim1) + (a/2)*log(a*a + lim1*lim1)].
The mean angle mAi = ang/(lim2 - lim1), because an integrated average is the integral divided by the x range between the limits. As hinted, the results might differ minutely if theta were calculated from the mean x position (x0 + xf)/2 - to be checked later? I never did.
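In C this mean angle is a direct transcription of the expression above; the function below is our sketch of it.

    #include <math.h>

    /* Mean cone angle mAi over the particle's travel, using the antiderivative
       F(x) = x*atan(a/x) + (a/2)*log(a^2 + x^2) from the text.                 */
    double mean_angle(double Rv, double r1, double r2, double srf)
    {
        double a    = r2 / srf;      /* capture (streamline) radius              */
        double lim2 = Rv - r1;       /* starting x (x0)                          */
        double lim1 = r1 + r2;       /* final x (xf), particles touching         */

        double F2 = lim2 * atan(a / lim2) + 0.5 * a * log(a * a + lim2 * lim2);
        double F1 = lim1 * atan(a / lim1) + 0.5 * a * log(a * a + lim1 * lim1);

        return (F2 - F1) / (lim2 - lim1);   /* integral average over the range   */
    }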

            (5) Probability Vectors:

Since we have established the probability of the direction each particle must take, let's now consider both motions along that alignment together with speed - the vector. Above, we found that one particle is aligned with the other along the red line in conea.gif again. Moving both particles at various speeds is shown in ProbVect.dia; the explanation there, with the diagram, is sufficient. We combine this probability with asr1 and asr2 derived above.

            (6) Budgeting Particles Combination Distribution:

                (a) Particle Size Combinations (Combos):

We start out with the particle to be captured (ip1), the smaller particle, and the capturing particle (ip2), the larger particle. The ip's indicate the number of captures or combines of ip1 particles that have occurred. However, the initial ip1 = 1u diameter particle is captured by its own ip1 particles because there are no larger ip2 particles yet to pair with. We have p1 = particle diameter of ip1 and p2 = particle diameter of ip2. If we combine these 2 spheres using their respective radii r1 and r2, the new volume is 4/3 pi R^3 = 4/3 pi r1^3 + 4/3 pi r2^3; cancelling the constant factors, R^3 = r1^3 + r2^3, or R = (r1^3 + r2^3)^1/3. For our first combination, call it a combo, we have r1 = r2, so R = (r^3 + r^3)^1/3 = (2r^3)^1/3 = 2^1/3 (r^3)^1/3 = 2^1/3 r, or 1.26r. So with r = 1u, our new combined sphere has radius R = 1.26u; if r = 2, R = 2.52. For now, we consider all combos' r2's with the 1st r1 @ ip1 = 1. Our 1st combo is then r = 1/2u, so R = 1.26r = 1.26/2 = .63u.

If we work with diameters instead: R = (r1^3 + r2^3)^1/3 with r1 = d1/2 and r2 = d2/2, so R = [(d1/2)^3 + (d2/2)^3]^1/3 = [(d1^3 + d2^3)/2^3]^1/3 = (d1^3 + d2^3)^1/3 / 2. With D = 2R = (d1^3 + d2^3)^1/3, the constant 2 cancels, and we have R = (r1^3 + r2^3)^1/3 and D = (d1^3 + d2^3)^1/3 - the same form. For our combos, we can refer to either R's or D's. Since we start out with D = 1u, we will refer to these combos ip1 & ip2 as diameters.

Using ip1 = 1u, the first combination is with itself: D = (d1^3 + d2^3)^1/3 with d2 = d1, so D = (2 d1^3)^1/3 = 2^1/3 d1; with ip1 = 1 and d1 = 1, the new diameter D = 2^1/3 - the same as with r's above. Now, if we let this D be d2 and combine with d1 again, D = (d1^3 + d2^3)^1/3 with d2 = 2^1/3 d1, so D = [d1^3 + (2^1/3 d1)^3]^1/3 = [d1^3 + 2 d1^3]^1/3 = [d1^3(1 + 2)]^1/3. With d1 = 1, D = (1 + 2)^1/3, which is D = (ip1 + ip2)^1/3 = 3^1/3, or 3 combined #1 particles. Using this for the next particle combo, ip2 = 3, so D = (ip1 + ip2)^1/3 = (1 + 3)^1/3 = 4^1/3. With the ip's representing whole-number combos of 1u particles, ip1 and ip2 can be any whole integers. If ip1 = ip2 = 2, D = (ip1 + ip2)^1/3 = (2 + 2)^1/3 = 4^1/3 also. A special case: suppose ip2 = 0, meaning no combo of 1u particles; then D = (ip1 + 0)^1/3 = ip1^1/3, the original size before the non-combo. This is silly, but the definition of the ip's must be exact and fit all possible cases.

The importance of the above is that instead of rounding off and working with D = 1u, 2u, 3u, etc., we could be more accurate and use 1u, 1.5u, 2u, 2.5u, etc., or even more precise particle sizes. Better still, we can simply note the number of times (ip1 + ip2) that 1u particles have been combined and take the cube root (^1/3) for the diameter of the new particle size. We use ip1 and ip2 to represent this, meaning that ip2 identifies particle #2 by its number of 1u combos. So the diameter of combos ip1 = 1 and ip2 = 1 is D = (ip1 + ip2)^1/3. Here we defined ip1 = 1u, and we define the ip's as the number of combined 1u particles. An ip2 = 2 indicates ip1 = 1 and ip2 = 1, or two 1u particles previously combined. An ip of 4 means ip1 = 1 combined with ip2 = 3, or ip1 = 2 combined with ip2 = 2 - 4 combos either way.
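In code, this bookkeeping is a single cube root; a sketch (ours, not necessarily how PDPL.c writes it):

    #include <math.h>

    /* Diameter in microns of a particle made of (ip1 + ip2) combined 1u particles. */
    double combo_diameter(int ip1, int ip2)
    {
        return cbrt((double)(ip1 + ip2));    /* D = (ip1 + ip2)^(1/3)                */
    }
    /* combo_diameter(1,1) = 1.26u; combo_diameter(1,3) = combo_diameter(2,2) = 1.587u */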

                (b) Storing Combos:

If we were to store the actual radii internally, it would require quite an array, with intervals of cut-off diameters in floating-point decimals. We kept this out of the program by writing an output file that identifies each integer combo ip1 + ip2 by its file position, filp. We convert the number of particles (npar) to a base and exponent stored as 256-value binary characters using bit shifts, as used in most picture graphics compressions. Otherwise, the internal RAM memory requirements would make this program impossible.
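The exact file layout isn't spelled out above, so the sketch below is only our guess at the flavor of the packing: frexp() splits npar into a base (mantissa) and a binary exponent, and bit shifts push the pieces out as single characters.

    #include <stdio.h>
    #include <math.h>

    /* Hypothetical base+exponent packing of a particle count into 4 bytes.
       The real PDPL.c layout may differ; this only illustrates the idea.     */
    void write_npar(FILE *fp, double npar)
    {
        int exp2;
        double m = frexp(npar, &exp2);             /* npar = m * 2^exp2, 0.5 <= m < 1 */
        unsigned long base = (unsigned long)(m * (1UL << 24));   /* 24-bit base        */

        fputc((int)((base >> 16) & 0xFF), fp);     /* base, high byte first            */
        fputc((int)((base >>  8) & 0xFF), fp);
        fputc((int)( base        & 0xFF), fp);
        fputc(exp2 & 0xFF, fp);                    /* exponent (positive, fits a byte) */
    }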

As we review (1) above again, we start out with all ip1 = 1u particles. This count is determined by the crater and city dust programs referenced above. The number of these particles is np1meg0, determined by converting the dust from ft^3 to u^3 and then dividing by the volume v1 of a 1u diameter sphere to get np1meg. For 1 mega @ NYC, this was 8.43123e25 u^3, which divided by the volume v1 (.5236 u^3) yields, at ip1 = 1, np1 = np1meg = 1.61025e26 1u particles. Therefore, we initialize np1 (the number of 1u particles) to np1meg.

                (c) Combos Loops:

Continuing with the budgeting procedure of ip1 = 1 & ip2 = 1 -> npar, we determine npar with the probability mini-model described below. One boundary condition is: what will our maximum particle size (ipmax) be? Going back to D = d1(1+k)^1/3, say D = 555 with d1 = 1u; then 1 + k = D^3, or k = 555^3 - 1 ~= 555^3 = 1.7095e8. For D = 200u, ipmax = 200^3 = 8e6 combos of 1u particles. Now ip1 = 1 is a bank supply (np1) of 1u particles. When ip1 = 1 combos with ip2 = 1, we have a new particle size ip2 = 2 with npar particles. In this case, np1 particles combined with np2 = np1 particles to give us npar particles of size ip2 = 2, which is stored in the output file. However, we must take 2 sets of np1 particles from np1meg and put the new value back at the file position for ip1 = 1. We now take the new ip2 = 2 as the new np2 to combine with the reduced np1's again to produce the next particle size, ip2 = 3, and so on. So in our loop, we use each ip2 = ip1 --> ipmax to combine with ip1 = 1, reducing np1meg, or np1 (ip1), by the npar particles the mini model predicts. For the next combo, we also have to reduce the np2 particles stored for ip2 = 2 when calculating the npar to store at ip = 3.

To best see this, refer to the color-coded PDPL.gif diagram showing all the steps by following the lines & arrows. For a step-by-step illustration of this budgeting with dummy numbers, look at PDPL.dia; this sample is probably the better of the two. Note the asterisk * with np1 @ ip = 1: it means we don't have to store np1 at its file position until we complete the ip2 loop (ip2 = 1 --> ipmax), since np1 is being reduced at each step of the loop.

                (d) Combos Loop Limits:

Do we reduce np1meg to 0? Since many small particles were observed in the cloud cap and stem, we instead reduce the original np1meg to a value np1min set by a factor. First we tried np1min = np1meg*1.e-6, and the values were still almost identical at np1min = np1meg*1.e-2, or 1% of the original number. As each successive ip2 becomes larger and combines with p1, it becomes a more effective collector of the small particles, as shown above. The probability mini-model isn't sophisticated enough to allow for a greedy ip2 particle that wants to combine with ip1: the npar it predicts for the ip1-ip2 pair may be more than is available. At that point, we break out of the ip2 loop and start the process again. If we didn't, we would continue with the next ip2, an even larger and more efficient collector, with even fewer particles for the smaller ip1 to combine with. This breaks the chain linking and distributing the larger particles to the smaller particles. The other extreme is that with a small collection efficiency, it would take forever to transfer the smaller particles to the larger, so the chain is so slow it can also be considered stopped growing. To avoid the first case, we code if(npar > np2), break out of the loop. The next time we reach that same ip2, more np2's will have been transferred from combos of smaller particles, so enough may be available to satisfy the greedy collector.

We keep track of the number of these passes with the variable 'loop'. The second time through, the np1meg bank of ip1 = 1 p1's is further reduced as the ip1's are distributed to combine with more ip2's. Again, this means the next loop will proceed further than the previous one because of more npars for each successive ip2. It actually reaches a point where ip2 passes a barrier, say near 10u, and breaks free to combine particles all the way to ipmax. However, we set another boundary: when np1 falls to 1/2 of np1meg (np1min), we also break out of this same loop, but we don't repeat it again because we are already below the 1/2 np1meg level. These two ways to break loops save cpu time, which could otherwise run for days on a cpu processor of near 1.8 gigahertz.

When np1 is reduced to 1/2 of its original value, its first combo (ip2 = 2) has more particles than the next (ip2 = 3) and possibly more than ip1 = 1. This means the probability of capture combines is greater than with ip1 = 1, though this slightly larger particle is slightly more difficult to capture than the 1u ip1. So we set: if(np1 < np2/2), then we move to a new ip1 = 2 for the combos instead of testing against np1min. When do we end the program? We decided to end at p2 > 2u, or ip1 > 8. The larger particles take many more ip1 combos per 1u of diameter than the smaller particles, since the captured smaller particles are spread over a larger surface area, making each new layer a thinner shell. Actually, it was thought that we might just condense a fixed % of the dust vapor directly onto the particles because of its vapor pressure, the way moisture collects on hygroscopic nuclei. We have no information on this and start with the 1u particle size, which should probably be called vapor as well. With the many loops in the probability mini-model, this action appears sufficient: the expansion by cooling, which condenses vapor, also creates the turbulent mixing where we apply the mini-model, so the vapor effect may be closely duplicated anyhow by the coalescing process.
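Pulling (c) and (d) together, the loop structure reads roughly as below. This is only our skeleton of the description (PDPL.c itself is not reproduced here); mini_model_npar() stands in for the probability mini model, and np[] stands in for the per-combo particle counts kept in the output file.

    #define IPMAX 8000000                 /* 200u: 200^3 combos of 1u particles          */

    extern double mini_model_npar(int ip1, int ip2, double np1, double np2);
    extern double np[];                   /* np[ip] = # of particles of ip 1u combos;
                                             needs IPMAX+1 entries                       */

    void budget_loops(double np1meg)
    {
        int ip1 = 1, ip2, loop = 0;
        double np1min = np1meg / 2.0;     /* stop once half the 1u bank is used          */
        double npar;

        np[1] = np1meg;                   /* start: everything is a 1u particle          */
        while (np[ip1] > np1min) {
            loop++;                       /* count passes through the ip2 loop           */
            for (ip2 = ip1; ip1 + ip2 <= IPMAX; ip2++) {
                npar = mini_model_npar(ip1, ip2, np[ip1], np[ip2]);
                if (npar > np[ip2])       /* greedy collector: not enough np2's yet      */
                    break;                /* restart the ip2 loop on the next pass       */
                np[ip1]       -= npar;    /* draw ip1 particles from the bank            */
                np[ip2]       -= npar;    /* and the collectors they combine with        */
                np[ip1 + ip2] += npar;    /* credit the new, larger combo                */
                if (np[ip1] < np1min)     /* 1u bank half spent: done with this ip1      */
                    break;
            }
            /* in the full program, when np1 < np2/2 the process moves on to
               ip1 = 2, 3, ... and ends once ip1 > 8 (p2 > 2u)                           */
        }
    }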

                (e) Extract & Convert Binary Data File: GPDP1.c

The next program extracts this binary file using bit shifts to convert the base number and its exponent back to npars, converts the ip combos back to diameters, then sums up the distributed npars for each 1u class interval. Later, the program converted the numbers of combos to diameters, halved them to radii, cubed them (r^3), and summed for the % activity comparison.
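A sketch of that binning step (ours, not the original GPDP1.c): given each combo count ip and its particle count npar, convert to diameter, bin into 1u classes, and weight by r^3 for the activity column.

    #include <math.h>

    #define NCLASS 201                    /* 1u class intervals out to ~200u        */

    /* Bin npar counts by 1u diameter class and accumulate r^3-weighted activity.   */
    void bin_combos(const int *ip, const double *npar, int n,
                    double *count, double *activity)
    {
        int i, c;
        for (i = 0; i < n; i++) {
            double d = cbrt((double)ip[i]);       /* combo count -> diameter, u      */
            double r = d / 2.0;                   /* radius                          */
            c = (int)d;                           /* 1u class interval               */
            if (c >= NCLASS)
                c = NCLASS - 1;
            count[c]    += npar[i];               /* particles per class             */
            activity[c] += npar[i] * r * r * r;   /* 4/3 pi cancels when taking %    */
        }
    }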

                (f) Preliminary Test Results:

We call this coalescing mini model the "Plasma Dust Particles Loop % Distribution". The results of earlier runs are seen in the first 2 columns of PDPLD.1 and PDPLD.2. We also show the relative radiation % activity as a reference in the 3rd column, proportional to npar x sphere volume. Since it is proportional and converted to %, we can ignore the 4/3 pi portion of the sphere volume formula, as the constants cancel out.

                (g) Final Results:

The final run used all the improvements from the topics mentioned above. As mentioned before, this final run is seen in the file flash.pda. We tried to write out the number of loops and iterations, but using fixed formats instead of exponents, the numbers exploded. My estimate is that the program ran the equivalent of 35 days, and the number of iterations was at least in the tens of trillions. With such a wide fluctuation range in the total number of particles, they had to be plotted on a log-log scale and smoothed. The graphs are seen in Pic 0 and Pic 1. The first pic shows that the transfer of the 1u-2u particle sizes reaches a minimum near 4.5u, then coalescing builds up to 200u and cuts off. The activity plot amplifies the upswing due to the r^3 factor and the increased number of larger particles. The data in flash.pda actually come from converting the smoothed graph from pixels to data by the method used previously in the WTC Dust Bowl Study Pix-Data.

                (h) Volume In = Volume Out Check:

Even though we have the plots from the data, are they right? At the end, we multiplied by 4/3 pi to give us the cumulative volume. So at the beginning of the program we took the original volume and converted it to the number of 1u diameter particles; we then combined the possible particle combinations and summed these particle volumes in the next program to check against the starting volume of 8.4312e25 u^3. The result was an exact match - a stunning surprise after 35 days of calculations combining most of the 1.61e26 particles with no loss of volume within 3 nested loops. Also, the data span a wide range of exponents, so adding numbers of such different sizes could cause some loss of precision. Therefore, we redid everything, dumped the binary file concept, and went straight to the full format of double precision numbers. See the difference between the float file and the double file.
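In sketch form the check is a one-pass sum, since a combo of ip 1u particles carries exactly ip times the 1u sphere volume (this is our own re-statement of the check, not the program's code):

    /* Volume-out check: the total volume of all combos should return the starting
       8.4312e25 u^3.  v1 = .5236 u^3 is the volume of a 1u diameter sphere.        */
    double total_volume(const int *ip, const double *npar, int n)
    {
        double v1  = 0.5236;
        double sum = 0.0;
        int i;
        for (i = 0; i < n; i++)
            sum += npar[i] * (double)ip[i] * v1;
        return sum;
    }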

                (i) Conclusion:

We now assume that the particles are distributed in progressive sizes in the nuclear cloud cap and have a fixed volume in which to coalesce. During the creation of the cap, the particles diverge and, as the cap swirls, these particles reach the bottom of the cap. The volume of this nuclear cloud cap for 1 mega was 7.790e+30u^3. However, this was for only one weather condition and for 1 mega; it will have to do for the other weather conditions with different volumes, which we can test later. But in real time, we can't recompute this each run because it took 35 days and looped through the mini model over 50e9 times. In our first estimate, we assumed the volume of dust from the crater and the city's demolished buildings carried into the cloud cap was 2.977e9'^3 = 8.4312e25u^3. With all the refinements shown in the topics above, we overestimated, mainly because of the flash vs. non-flash particles, which are now quantified. The overestimate would indicate too much coalescing due to the higher concentration of particles. Since the run takes so long, the overestimate may actually have been helpful, as it shortens the coalescing run time - but are the results valid? Probably only minute differences, since higher concentrations speed up coalescing, which balances what would happen if we ran the program longer. We ignored the minute stem concentration and considered that all of it went into the cap to cool and coalesce. We found that working with decimal % of concentration was better, as we can now apply any flash volume or tons of dust.

Radioactive Fallout Model (Radfo) - Part II :