The Moon's Total Downward Velocity
I think I have found a way to find the moon's total drop velocity. For this, we need to draw an earth circle, with a lunar-orbit path starting directly above the center of the circle and ending one quarter of the way around, at the 9 o'clock side of the circle. We then draw a horizontal line from the start of the path (at the 12 o'clock position) toward the left, and we call this the direction of the lunar path if gravity were to be turned off suddenly, with the moon neither dropping nor rising.
A quarter orbit takes 27.32166 / 4 = 6.83 days = 163.4744 hours. As we'll use 252,400 miles as the lunar distance, the distance between the moon at the 9 o'clock position and the horizontal line is 252,400 miles. The moon has thus moved 252,400 miles toward the bottom of the page over 163.4744 hours (= 588,507.84 seconds), and these numbers should allow us to find what I'm calling the invisible drop-velocity, the part that causes the circular orbit but doesn't have the moon nearing the earth (if this confuses you, see "invisible" in the last chapter for the explanation). If it were true (it's not) that the moon dropped the full distance from the horizontal line to the 9 o'clock position, the drop velocity works out to 252,400 / 163.4744 = 1,544 mi/hr.
However, when the moon is midway between the 12 o'clock and 9 o'clock position, it is falling on a diagonal, toward the earth core. It is therefore falling half toward the bottom of the page, and half toward the side of the page. When it gets to the 9 o'clock position, it is falling zero toward the bottom, and fully toward the side. My reasoning is that the fall rate (from the horizontal line) at the 10:30 position should be used for the entire quarter-orbit. In other words, the total fall distance is whatever the fall is on a diagonal line (45 degrees), which is half the 252,400 miles. We therefore re-do the math as 252,400 / 2 / 163.4744 = 772 mi/hr. I think I have this method correct. If we go a half orbit instead of a quarter, both figures are doubled so that the velocity remains the same.
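The quarter-orbit arithmetic above can be checked with a short sketch (Python used throughout this chapter's checks; the 23.9333-hour day is my assumption, inferred from the 163.4744-hour figure above):

```python
# Quarter-orbit drop-velocity arithmetic from the text.
SIDEREAL_MONTH_DAYS = 27.32166
HOURS_PER_DAY = 23.9333        # assumption: the day length implied by 163.4744 hours
LUNAR_DISTANCE_MI = 252_400

quarter_hours = SIDEREAL_MONTH_DAYS / 4 * HOURS_PER_DAY
print(round(quarter_hours, 4))                      # ~163.4744 hours

naive_drop = LUNAR_DISTANCE_MI / quarter_hours      # full-distance assumption
half_drop = LUNAR_DISTANCE_MI / 2 / quarter_hours   # the text's 45-degree halving
print(round(naive_drop), round(half_drop))          # ~1544 and ~772 mi/hr
```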
In our drawing, the orbital path is perfectly circular, having no ellipse. It means that there is no lunar acceleration in the down direction; all we have is the moon falling constantly at 772 mi/hr (345.115 meters per second), explaining why the altitude remains the same. So long as the moon does a perfect circle from 252,400 miles away, it will drop at 772 mi/hr. This method is an important piece to the puzzle. We can see in this that the moon never accelerates until it starts to near the earth.
At its furthest distance from the planet, at apogee, the moon MUST be moving perfectly parallel with the earth surface, because it's becoming neither more nor less distant from the planet. Therefore, the situation is exactly as it would be when conducting a test to see what the acceleration rate of a free-falling object would be. There are said to be 14 days, 16 hours and 10 minutes between apogee (July 15, 2000) and perigee (July 30), or 352 hours x 3,600 = 1,267,200 seconds.
I can now test NASA's lunar acceleration rate, .002414257 meters/sec2, with the findings above. Here is their formula, with their number, for finding the fall distance over the apogee-perigee time period: .5 x .002414257 m/s2 x 1,264,200 seconds x 1,264,200 seconds = nearly 1.2 million miles. The moon obviously doesn't fall visibly more than a million miles, but let's see whether it falls that much invisibly, according to the 772 mi/hr over 351.1873 hours: 772 x 351.1873 = 271,117 miles.
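Both figures in this comparison can be reproduced in a couple of lines (a sketch of the text's check, keeping its 1,264,200-second span):

```python
# NASA-figure fall distance vs. the text's constant 772 mi/hr drop.
METERS_PER_MILE = 1609.344
t_seconds = 1_264_200

nasa_fall_mi = 0.5 * 0.002414257 * t_seconds**2 / METERS_PER_MILE
print(round(nasa_fall_mi / 1e6, 1))     # ~1.2 million miles

steady_drop_mi = 772 * 351.1873         # mi/hr times hours
print(round(steady_drop_mi))            # ~271,117 miles
```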
It is interesting to know that, if we had used exactly a half month instead of the apogee-to-perigee period (more than a half month), the drop distance would have been 252,400 miles, which is the lunar distance. A half month is 27.32166 / 2 = 13.6608 days = 326.95 hours, wherefore we re-do the above as 771.9863 x 326.95 = 252,400.9 miles. It's just the way she works with a perfectly-round orbit. There cannot be a perfectly round orbit, or the moon would continually accelerate and hit the planet. A stable orbit requires an oval / ellipse causing some deceleration to offset the acceleration in roughly half the orbit. It's a fantastic creation by the fantastic Creator. Give credit where credit is due. Do not be like the goon-led nature shows that extol the creation without the mention of God. The goons who lead National Geographic will pay the price.
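The half-month identity claimed above checks out numerically (a sketch using the text's own figures):

```python
# Constant 771.9863 mi/hr over a half sidereal month of 326.95 hours.
fall_mi = 771.9863 * 326.95
print(round(fall_mi, 1))    # ~252,400.9 miles, i.e. the lunar distance
```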
As you can see, the acceleration from this method does not substantiate the lunar acceleration figure of the astronomers. What's the problem? I'd say it's because they cooked the books.
The way to find the acceleration number of an already-moving body is to subtract the start velocity from the end velocity, then divide by the time period concerned. In this case, we will use 772 mi/hr = 345.115 m/s as the start velocity, and for the end velocity, we'll use the number obtained in the last chapter. In the first few days after releasing the last chapter, I had the apogee-to-perigee acceleration at .00006236 m/s2 due to counting the wrong number of hours between apogee and perigee. My apologies. That number has since changed to .00005956.
We can now find the average velocity between apogee and perigee, and then tack it on to the 772 mi/hr (345.115 m/s) obtained in our drawing. The moon fell 29,714 miles = 47,820 kilometers (over the 1,264,200 seconds), wherefore the math can be done: 47,820 / 351.23 hours = 136,150 meters per hour = 37.82 m/s on average. The latter number is not quite right because it should be 37.737. I want to show you a thing from the last chapter, when using 24-hour periods instead of 23.9333 (it may signal that the program I'm using to calculate the apogee and perigee distances was working with 24-hour days):
We can do the same math over the apogee-to-perigee period of 1,267,200 seconds if we had the acceleration velocity over that time period. The .00005956 figure, with the free-fall calculator, has a velocity of 75.474 m/s when we use 1,267,200 seconds instead of 80,400. Therefore, we check: (75.474 m/s - 0) / 1,267,200 = .00005956. As the velocity figures given by the calculator are twice the average speeds, the average velocity is 75.474 / 2 = 37.737 m/s. The moon fell 29,714 miles = 47,820 kilometers (over the 1,267,200 seconds), wherefore the math can be done: 47,820.6 / 352 hours = 135.854 kilometers per hour = 37.7367 m/s on average, which is virtually the 37.737 above.
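The unit conversions in this check can be sketched as:

```python
# Average-velocity arithmetic over the 1,267,200-second span.
fall_km = 47_820.6
hours = 352

avg_mps = fall_km * 1000 / (hours * 3600)
print(round(avg_mps, 4))          # ~37.7372 m/s on average
print(round(2 * avg_mps, 3))      # ~75.474 m/s, the calculator's end velocity
```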
The extra math was to check things, to make sure I was getting the numbers right. But when we use the more-precise numbers in this chapter, the average velocity is 37.824 m/s. I'll use 37.737 and 1,267,200 so that one doesn't get inconsistency with the lunar distances given by the apogee/perigee calculator. It was determined that the velocity at the end of the entire fall, at perigee in this case, is twice the average (37.737), or 75.474 m/s, and we need to add it to 345.115 for a total velocity of 420.589 m/s = 941.1 mi/hr (at perigee, July 30). Let me repeat that: 345.115 is the velocity in a perfect-circle scenario with zero acceleration. I am assuming that this is the constant free-fall velocity of the moon for the entire orbit, aside from the acceleration and deceleration in the real situation. I'm therefore viewing the moon's fall rate at 345.115 m/s at apogee (July 15, 2000), and 420.589 m/s at perigee.
When it got to apogee again, late on August 11, the lunar distance was not as far as it was a month earlier, and the velocity of downward drop wouldn't have been identical. This is the wonder of it all, that while the acceleration and deceleration changes with different ellipse shapes from month to month, the moon manages to stay at the same average distance from the earth, a miracle that evolutionist / astronomy goons are willfully blind to. The planets perform the very same miracles, and this is God mocking the goons, showing them how superior to them he is. Yet, the stupids refuse to acknowledge Him, and they insist on bringing the entire human race to Hell with them.
The formula is: "Acceleration = (Velocity Difference) / (Time Difference)". We can now do: (420.589 - 345.115) / 1,267,200 = .00005956 m/s2 (= the same number as above when ignoring the circular-orbit picture and using only the 75.474 velocity at perigee). It's not near their number of .00241. I have no idea how they can justify that number. The formula at the free-fall-calculator page is .5 x acceleration x time x time = height of fall, or .5 x .00005956 x 1,267,200 x 1,267,200 = 47,820,600 meters.
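Both formulas above can be run on these figures (a sketch; the small mismatch in the last digits comes from rounding .00005956):

```python
# Acceleration = (velocity difference) / (time difference), then the
# free-fall formula 0.5 * a * t^2 as a cross-check.
v_start = 345.115        # m/s, the perfect-circle fall velocity
v_end = 420.589          # m/s at perigee
t = 1_267_200            # seconds

a = (v_end - v_start) / t
print(a)                        # ~5.956e-05 m/s^2

height = 0.5 * a * t * t
print(round(height / 1e6, 2))   # ~47.82 million meters of fall
```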
My problem is, the acceleration figure (.00005956) doesn't stay the same for a calculation at the eclipse, at a known time of 80,400 seconds after apogee. One can find the acceleration number if one has the 80,400 seconds as well as the distance of fall, or the distance of fall if one has the seconds and the acceleration (at the time, I had neither the acceleration for that stretch nor the distance). The way I understand it, this number is the average acceleration, not the acceleration at every point along the path from apogee to perigee. The formula for finding acceleration is: distance of fall / time / time / .5. By changing the distance and keeping the time identical, the acceleration figure changes. However, it seems to me that one should be able to use the .00005956 number to see how far the moon has dropped after 80,400 seconds. I wasn't ready to try this while writing the last chapter.
The way it works is that, at the end of the second second of fall, the velocity is .00005956 x 2 m/s, while the drop distance is .5 x .00005956 x 4 meters. To put it another way, the distance of fall is always the time multiplied by half the final velocity. After 80,400 seconds, the velocity ought to be .00005956 x 80,400 = 4.788624 m/s (10.7 mi/hr). And I have just figured out, as I write, how to find the distance of fall. It always happens to be the velocity times half the velocity. For example, with 1 in the g box and 80,400 in the top box, the velocity is given as 1 x 80,400, while the fall is given as 3,232,080,000 meters, which happens to be the result of 80,400 x 40,200. This is a great little "secret" to have. The only problem is, it only works with 1 in the g box.
But wait. When we enter 2 in the g box (80,400 in the top box), velocity x 1/4 velocity gives the correct fall distance. With 3 in the g box, velocity x 1/6 velocity (.16666) gets the distance. With 4 in the g box, it's velocity x 1/8 velocity (.125). With 5 in the g box, it's velocity x 1/10 velocity (.1). With 6, it's velocity x 1/12 velocity (.08333). With 7, it's velocity x 1/14 velocity (.0714). Is there a pattern to be exploited? I see it. It's g-box / (g-box-squared x 2). Here it is:
2 / (2 x 2 x 2) = .25;
3 / (3 x 3 x 2) = .16666;
4 / (4 x 4 x 2) = .125;
5 / (5 x 5 x 2) = .1;
6 / (6 x 6 x 2) = .08333;
7 / (7 x 7 x 2) = .0714.
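The pattern amounts to this: g / (g x g x 2) is just 1 / (2g), which turns the "secret" into the standard free-fall identity, fall distance = velocity x velocity / (2g). A sketch:

```python
# The list's multiplier g / (g*g*2) simplifies to 1 / (2*g), so the
# fall distance is always v * v / (2 * g), whatever sits in the g box.
for g in range(2, 8):
    print(g, round(g / (g * g * 2), 5))    # .25, .16667, .125, .1, .08333, .07143

# With g = 1 and t = 80,400 s (the earlier example): v = 80,400 m/s and
# the fall is 80,400 x 40,200 = 3,232,080,000 meters.
g, t = 1, 80_400
v = g * t
print(0.5 * g * t * t == v * v / (2 * g))  # True
```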
For the first entry, the drop distance is velocity x .25 of the velocity, which is mathematically V x .25 x V. This means that I can find the moon's altitude at the moment of the eclipse with 80,400 in the top box and .00005956 in the g box, which gets a velocity of 4.788624 meters per second. Our long-hand formula, solving step-by-step, is:
4.788624 x (.00005956 / (.00005956 x .00005956 x 2)) x 4.788624
= 4.788624 x (.00005956 / (.000000007095)) x 4.788624
= 4.788624 x (8394.881) x 4.788624
= 192,502.3375 meters = 119.6 miles
Excellent. In the last chapter, I took a stab at it by the only way I knew how, knowing I'd be close but not sure whether it was bang-on. I said: "When we put .00005956 into the free-fall calculator along with 80,400 seconds, we get 192,502.685 meters of fall = 119.6 miles (that's more like it), and a velocity of 4.79 meters per second = 10.7 miles per hour." Here I now find the formula that gets the same 119.6 using the same 10.7 mi/hr (4.788624 m/s above). In other words, we don't need the long hand. All we do is what was done, simply, in the quote. It seems to disclose that, once the acceleration figure is established between apogee and perigee, it can be used for any point in time along the way to find where the moon was in altitude above the earth. This is good news, indeed. We can now find what the moon's altitude was at any time, so long as we have the apogee and perigee distances, and so long as it's correct to use the same acceleration rate at any time during this half orbit.
Just to verify that the long-hand version works, let's do it with the velocity figure (75.47 m/s) given for the moon at perigee:
75.474432 x (.00005956 / (.00005956 x .00005956 x 2)) x 75.474432
= 75.474432 x (.00005956 / (.000000007095)) x 75.474432
= 75.474432 x (8394.881) x 75.474432
= 47,820,515.2 meters = 29,714.3 miles
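Both long-hand runs collapse to fall = v x v / (2g); a sketch confirming the eclipse and perigee figures:

```python
# d = v^2 / (2g): the compact form of the long-hand formula above.
g = 0.00005956

def fall(v):
    return v * v / (2 * g)

print(round(fall(4.788624), 1))          # ~192,502.7 meters (~119.6 miles), eclipse
print(round(fall(75.474432) / 1e6, 2))   # ~47.82 million meters (~29,714 miles), perigee
```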
The free-fall calculator has 47,820,600 when we enter 1267200 into the top box and .00005956 in the g box. My final figure is a little off, probably because I'm not using all the decimal places, but the moon is said to have been 29,714 miles closer to earth at perigee than at apogee.
It's true that the acceleration rate changes as the moon gets nearer to earth, raising the question of whether the number I'm using is the average figure, and, if so, whether it's accurate only for midway between apogee and perigee (i.e. at the average lunar distance). I don't think that is correct. The way I see it, and I'm sure that this is correct, the acceleration number is the distance of fall (in meters) for the first second of fall starting at 252,400 miles away. I can see no reason that we can't start with that premise, and work forward 80,400 seconds to calculate the fall distance at that very time. I had doubted this in the last chapter, but with a clearer mind now, I think it is correct. If we start the fall at the eclipse and end it at perigee, we definitely will get a different acceleration number, but that is not permissible, because that was not the reality. The fall started at apogee.
I had lamented a couple of weeks / chapters ago that one might never be able to find lunar distances because no one seems to be publishing a method to find them. NASA, it seems, does not want us to find a way, and I understand why not: NASA is afraid that we'll discover the 93-million-mile hoax using lunar-eclipse lines.
I can now do the math better than before for finding the true solar distance based on an umbra diameter of 6,000 miles. This unknown is the last thing I need to find before I can clinch the true solar distance. I now have the lunar distance of 252,400 - 119.6 = 252,280 miles, whereas, previously, I was guessing with 250,050 miles, not very close at all to the reality. Here is what the picture looks like with 6,000:
With a shadow diameter of 6,000 miles, the distance from shadow edge to earth edge is (7,918 - 6,000) / 2 = 959 miles. The latter number in the edge-a box along with 252,280 in the edge-b box gets .217799 degree, and with the solar radius on that day at .524555 / 2 = .2622778 degree, it's a difference of .2622778 / .2178 = 1.20422 times. The solar distance using the 959 figure is like this: 3,959 / (.004578 - (959 / 252,280)) = 5.0974 million miles. We multiply the latter by 1.20422 above to find that the E-M triangle is 6.1384 million miles long, giving an earth-shadow length of 6.1384 million - 5.0974 million = 1.041 million miles, and the right-angle calculator has it at 1.0415 million when fed .2178 degree [put 3959 in edge-a box]. It means my math checks out.
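The chain of figures in this paragraph can be reproduced; I take the .004578 figure as the tangent of the .2622778-degree solar-radius angle (an assumption consistent with the numbers):

```python
import math

# Eclipse-line arithmetic with a 6,000-mile umbra and 252,280-mile lunar distance.
edge_gap = (7_918 - 6_000) / 2          # 959 miles, shadow edge to earth edge
lunar_dist = 252_280

shadow_deg = math.degrees(math.atan(edge_gap / lunar_dist))
print(round(shadow_deg, 4))             # ~0.2178 degree

solar_radius_deg = 0.524555 / 2
print(round(solar_radius_deg / shadow_deg, 4))   # ~1.2042 times

solar_dist = 3_959 / (0.004578 - edge_gap / lunar_dist)
print(round(solar_dist / 1e6, 4))       # ~5.0974 million miles

em_line = solar_dist * (solar_radius_deg / shadow_deg)
shadow_len = em_line - solar_dist
print(round(shadow_len / 1e6, 2))       # ~1.04 million miles

# right-angle check: 3,959-mile edge at 0.2178 degree
print(round(3_959 / math.tan(math.radians(0.2178)) / 1e6, 4))  # ~1.0415 million
```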
The problem is, I was able to see that astronomers in relation to NASA altered the times for the events of the eclipse, so that one cannot be sure of the size of the umbra even when using the correct velocity of the moon in its orbit. The eclipse page shows the time of one umbra diameter as 2.87 hours. If the moon were moving at the slowest-possible velocity (NASA says 2,156.40658 mi/hr), the umbra works out to be 6,189 miles wide. I would know to use a slightly larger number, because the moon was faster than its slowest velocity at the eclipse, except that I can't trust the 2.87 hours given by the eclipse records. The records added a few minutes too many between U1 and U2 in order to get the lunar velocity down to 2,000 mph, in order to get a false umbra diameter of only 5,738 miles. The question is whether the addition of these minutes changes the 2.87 hours between U1 and U3; I was unable to arrive at an answer.
As the moon decelerates over roughly half the orbit, that's like someone pulling a string on a free-falling object, spoiling the free-fall; but for the other half of the orbit, from apogee to perigee, when the moon accelerates, the string is no longer pulled, and the moon falls freely. This is why one can use the apogee-to-perigee half to find true acceleration. There is nothing happening at that time but the moon being pulled freely to earth, without the obstruction of an orbit angle away from earth, as there is between perigee and apogee. It's the angle away from earth that turns acceleration to deceleration.
Here is the scenario using the acceleration figure that NASA would use. We put their number, .00241, into the g box along with 80,400 seconds in the top box. We find the calculator giving a fall distance of 7,789,312.8 meters = 4,840 miles. That is, their number claims that the moon, one day after apogee, at mid-eclipse, was 252,400 - 4,840 = 247,560 miles from earth. NASA's own eclipse page tells in various ways that the umbra diameter was 5,738 miles (explained in previous chapters). The astronomers' scenario is therefore like so:
With a shadow diameter of 5,738 miles, the distance from shadow edge to earth edge is (7,918 - 5,738) / 2 = 1,090 miles. The solar distance using the 1,090 figure is like this: 3,959 / (.004578 - (1,090 / 247,560)) = 22.6 million miles.
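The same construction with the NASA-attributed numbers gives the 22.6-million-mile result (a sketch; .004578 is again taken as the tangent of the solar-radius angle):

```python
# NASA-figure scenario: .00241 m/s^2 over 80,400 s, then the eclipse line.
fall_m = 0.5 * 0.00241 * 80_400**2
fall_mi = fall_m / 1609.344
print(round(fall_mi))                       # ~4,840 miles of fall

lunar_dist = 252_400 - 4_840                # 247,560 miles at mid-eclipse
edge_gap = (7_918 - 5_738) / 2              # 1,090 miles with the 5,738-mi umbra
solar_dist = 3_959 / (0.004578 - edge_gap / lunar_dist)
print(round(solar_dist / 1e6, 1))           # ~22.6 million miles
```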
It doesn't work, wherefore we should like to ask NASA why they give their lunar-acceleration figure as they do. I've been using the method above for weeks, and thus far I have found nothing wrong with it. It seems foolproof. It's a simple way to draw the lunar-eclipse line off the umbra diameter, and to compare it with the solar line off the so-called angular size of the sun on the day of the eclipse. There are just two lines, and where they meet, that's where the sun sits. With an umbra diameter of 6,000 miles, and using my acceleration figure, the sun is 5 million miles away. Using NASA's numbers gets 22 million. In order to get 93 million, they would have needed an even higher acceleration figure, but, likely, they realized they were pushing it already with their false claim of .00241.
A higher number makes the moon at the eclipse even further from its apogee altitude. Someone is bound to notice the problem. They already have the moon dropping almost 5,000 miles in a mere 22.333 hours (80,400 seconds). This is the period when lunar fall is at its slowest. Apogee is like a ball tossed into the air: it reaches the greatest height, from where it falls back down with the slowest velocity at the beginning of the fall. But if the moon did 5,000 miles in one day, consider how much further it would have fallen more than 13.5 days later (at perigee). They say the apogee-to-perigee distance is only 29,715 miles. I would like to hear how NASA justifies its lunar-acceleration rate.
I have found verification that an object moving parallel with the earth's surface falls at the same rate as an object simply dropped: "An object is pulled directly downward by gravity with uniform acceleration whether or not it is flung outward or just dropped" (Chapter 2 Galileo's Great Discovery: How Things Fall; I don't have the webpage or the author).
The acceleration of lunar gravity is not to be confused with the acceleration of the moon toward the earth. The page below says that the moon is continually accelerating toward the earth, which cannot be true. It is always dropping, but not always accelerating. About midway down the page, it gives the lunar acceleration as exactly/nearly 3,600 times less than the 9.81 acceleration of earth gravity at the earth's surface, or .00272 m/s2 (its number). Others use .0028. This tends to answer how they arrived at their figure: by the inverse-square law of gravity. They get the distance to the moon, and figure out how many times weaker gravity is there than at the earth's surface. This is a good method to use, but the number is not in line with the lunar drop rate. I assume that their number is claimed for the average lunar distance.
I'm testing this claim. I show the numbers below, starting with gravity at ground level, 3,959 miles from the core, with a corresponding force of 1 times gravity. Each succeeding number shows twice the distance with a corresponding 4-times-less gravity force:
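The progression described can be generated in a few lines (my reconstruction of the table, assuming six doublings from 3,959 miles):

```python
# Doubling table: distance doubles each step, gravity weakens fourfold.
distance_mi, weakness = 3_959, 1
for _ in range(7):
    print(f"{distance_mi:>9,} miles  ->  {weakness:,}x weaker")
    distance_mi *= 2
    weakness *= 4
# final printed row: 253,376 miles -> 4,096x weaker
```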
It's great that we arrive at 253,376 miles from earth, nearly the distance at apogee. As you can see, gravity is predicted to be 4,096 times weaker at that altitude, so it seems that 3,600x is roughly correct for the average lunar distance. As one can glean about 4,000x at 252,400 miles up, one can also find the lunar acceleration, by this method, simply with 9.8 / 4,000 = .00245. This is virtually the .00241 obtained from NASA's number. Here is how it was first obtained:
NASA has a fact sheet using 5.9723p24 [for earth mass]...Therefore, according to NASA, the lunar acceleration was 5.9723p24 x 6.67p-11 / 1.65p17 = .002414257 m/s2, where 1.65p17 is obtained from 406,198,425.6 squared.
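That computation can be reproduced directly (using the text's 6.67p-11 and 5.9723p24 figures):

```python
# NASA-style lunar acceleration: a = G * M / r^2.
G = 6.67e-11            # gravitational constant as quoted (6.67p-11)
M = 5.9723e24           # earth mass from the NASA fact sheet (5.9723p24)
r = 406_198_425.6       # meters = 252,400 miles

print(r**2)             # ~1.65e17, the text's 1.65p17
print(G * M / r**2)     # ~0.00241 m/s^2
```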
As 406,198,425.6 meters = 252,400 miles, it explains why .00241 is obtained, the number they expect for that height above the earth. We now know it's true that NASA, and all the rest of the goons, have fixed / cooked their gravity-constant and earth-mass numbers so that they serve to get a lunar acceleration in line with the inverse-square law of gravity. But something is wrong here. Why haven't they used the method I've shown to get lunar acceleration? Just measure the moon's drop between apogee and perigee, and voila! It's so simple and reliable. Why isn't the inverse-square-law scenario coming into line with that reliable method?
Let's re-do the gravity scenario using an initial height of 2,000 miles between the center of gravity and the earth surface, for it's not necessarily true that gravity originates from the core of the earth.
That changes things perfectly for the .00005956 acceleration number, for we now do 9.8 / 16,100 = .0000609. It's perhaps true that gravity attracts all things toward the core, but this doesn't mean that the center of gravity must be at the core. One gets that impression if the entire earth is the gravity source, but it clearly is not. Rather, the internal heat is the source, and, for all we know, there is no heat at the core. I feel inclined, by the evidence just presented, to view the center of the gravity source about midway between the core and surface. I can't prove it, but the numbers have just presented the theory. If I had not developed the heat=gravity concept, I would be wholly out of luck explaining why their acceleration figure is far incorrect.
If the rocks produce weight, and weight produces heat, what came first, the rocks or gravity? If there were rocks without internal heat, there would be no weight to produce the heat. You should never think of any situation without a Creator in the picture. He allowed demons amongst men to confound His enemies, to lead them into foolishness, with pride. Pride in their stupidity, the condition of earth today and yesterday. I was stupid, but I then saw the Light. It was a shining Light. What I mean is, I saw straight. I no longer had blinders on. I SAW. I once did not see, and then I SAW. What did I see? I saw that my superiors were fools. I saw that they were lying all over the place, and clinging to error. When I tried to share God with peers, they were not wanting to see. It wasn't that they were unable; they were not wanting the Creator in their lives. They believed they would be happy without Him. He was considered a liability in their lives. He was of no use to their plans. They had the wrong impression about the meaning of life. They did not respect their own Creator. All of Western society was patterned after atheism, and God would be shoved out of all things touched by government. Any little thing with God in it had to be wiped away. Arrogant and stupid, that is the condition of this world. In one generation with fools changing it liberally, society became infested with violence and sin.
Now that we know the distance of the moon at the eclipse, can this tell us what the shadow diameter was where the moon crossed through it? The way to know the shadow diameter at any given point is to know the sun's distance and size, and the way to know the sun's distance and size includes knowing the shadow diameter in a lunar eclipse. We are stuck. NASA is hiding the truth. There is a way to know the shadow diameter in units of lunar diameters, by measuring the time between U1 and U3 of a central-path eclipse.
The most central one I could find between 1900 and beyond 2016 was July 16, 2000. NASA claims to have online records of all lunar eclipses in the 20th century, but I was impressed by the fact that NASA was showing so few central-path eclipses. I could find only two of them, the other in July of 2018. This is an opportunity, less than two years from now, for the world to measure the U1-U3 time, to make sure that NASA does not report false times. The problem is, the eclipse will be centered off the east side of Africa (it's winter there in our summer). Yet, anyone wanting a summer vacation in that part of the world can go measure the time between U1 and U3. Perhaps we can contact an astronomer in that part of the world, and alert them to NASA's game. Perhaps a Christian astronomer (in Australia?) would be happy to measure the U1-U3 time. Here are the 2018 details:
There is a map at the bottom of the page showing U1-U3 in relation to New Zealand as well as a U1-U3 in the middle of the Atlantic ocean. This confuses me. Whatever it means, it seems that the best place to view the event is at the horn of Africa, Madagascar, Saudi Arabia, or the tip of India. It might be true that, the further from the center of the eclipse one times it, the more erroneous the time measurement becomes.
Is there another way to get the shadow diameter at any given point from the earth? All we know is that the diameter is 7,918 miles (or whatever the earth diameter is) at the earth's edges, and that it narrows to a pin-point from there at a certain angle. The angle depends on the sun's size and distance, wherefore the umbra can be correctly drawn if there is another way to know the solar facts. In that case, we wouldn't need the umbra diameter, although it would be good to have it so that we may correct NASA on its eclipse times. We can call NASA out to explain itself.
The other way to get the solar distance is to get the accurate parallax angle. These lines go from the edges of the earth to the center of the sun. It's a sure-fire way to know the solar distance, if one can get the angles measured correctly. That's a big if, and astronomers are clearly feeding us garbage on their .00244 angle (in degrees). Hmm, that number is almost their lunar-acceleration number. We read: "The quest to measure the AU [solar distance] became one of the central pursuits of astronomical research in the late-17th, 18th, and 19th centuries. In many ways, this obsession with cosmic distances, and the great efforts expended in this quest, find a strong resonance with our own pursuit of the size of the Universe." Isaac Newton had his own solar-distance calculation based on the parallax angle.
We read further: "Halley calculated that if you can get the timing precision [of a Venus transit over the sun] down to 2 seconds, then it should be possible to measure the sun's parallax to a precision of 1/40-th of an arcsecond, which would provide a distance to the sun with an unprecedented precision of 1 part in 500! (This number was based on his assumption that the solar parallax would be 12.5 arcsec)." The writer is exclaiming this statement because he's a typical goon intent on deceiving the reader with the run-of-the-mill garbage on the Venus-transit experiments.
My claim is that the goon club of Newton's time had already decided to expand the size of the solar system on behalf of the developing cosmic-evolutionary theory. Halley's figure, 12.5 arc-seconds, is .00347 degree. When we put that in the angle-A box of the right-angle calculator (below) along with 3959 in the edge-A box, the solar distance comes out as 65 million miles.
The purpose of quoting from the article above is to show that they were using the Venus transit to measure the solar parallax, and this comes with smoke-and-mirror deception. You never really get a good grasp as to what they were doing with this experiment, and, if you do, you can check for circular reasoning. You can find it. I did. The point is, why didn't they just measure the solar parallax directly? I think I know the answer. It didn't give them a sun as far as they wanted it. There is no reason that they cannot take the solar parallax to the center of the sun. I agree that it's virtually impossible to get the angle to the edge of the sun (there are no defined edges, really), but the center of the sun is at a knowable location. So, just get two telescopes as far apart as possible on the earth, and measure the angles to the center of the sun. Done deal. Yet, we don't hear of such an experiment when we google "solar parallax."
If it's not possible to get the angle to the edge of the sun due to the edge being undefined, how could the astronomers coming after Halley measure the transit of Venus accurately, since they were timing it from edge of the sun to edge of the sun? If they thought they could measure the contact of Venus with the edge of the sun so nearly to perfection, why didn't they just use their telescopes as I suggest above? Just measure the sun's edges with the telescope. But, nope, that won't do. I'm sure I know why, don't you?
Halley was taking some of his information from Newton. I'm reading: "In the first edition, Newton assumed a solar parallax of 20 arc seconds corresponding to taking the solar distance at about 10,000 radii or 5,000 diameters of the earth." Twenty arc-seconds = .00555 degree = a solar distance of about 40 million miles. And this was Newton's first-published figure; he may have increased the solar distance later. Already, 40 million miles was huge. Already, it begins to paint Newton (a leading Rosicrucian) as an evolutionist's friend. The article then says that there were men (Cassini and Flamsteed) in his day wanting to bring it down to 10 arc-seconds (= a greater solar distance). Modern sources claim the solar parallax to be 17.28 arc-seconds when two eyes are one earth diameter apart, and half that (8.64) when two eyes are one earth radius apart. As they commonly say that the solar parallax is the smaller of those two numbers, I assume that Cassini's number was with two eyes / telescopes one radius apart.
Alas, in his second edition, Newton changed the 20 to 10 arc-seconds! There you have my own exclamation mark, to mark the hoax rich in Newton's mind. How was he arriving at this figure? By what coincidence did it agree exactly with Cassini and Flamsteed? Can we imagine peer pressure on Newton to convert? We then read that his third edition had 10.5, virtually the angle held to today. Cassini found solar parallax by first finding it for Mars. But, again, why didn't he just point the telescope to the center of the sun from two different locations? Apparently, one can get some hocus-pocus magic when measuring the Mars parallax that then converts the solar parallax to the evolutionist's garbage.
The page below has a simple-to-understand parallax measurement. It shows that angle measurements are taken to the center of objects, and so why didn't they just do it to the center of the sun? Really. The writer takes your two eyes (say 3 inches apart) as the two telescopes at a distance apart, and asks you to measure the angle from each eye to the center of your finger 2 feet in front of your eyes. It gives the formula for finding that angle from one eye as: distance between your eyes / 2 / distance to your finger, or .25 feet / 2 / 2 feet = .0625. This result is not the angle, but the sine of the angle. Wikipedia: "...the sine of the angle is equal to the length of the opposite side [of the triangle] divided by the length of the hypotenuse." The hypotenuse (always defined as the side opposite the 90-degree angle) is the line from eye to finger, and the opposite side refers to the side opposite the angle of concern. As the formula includes the distance at the eyes, the angle of concern is not the one at either eye, but the one at the finger. That is, the angle of concern is at the tip of the triangle. Therefore, the formula at the page below is the same as we find in Wikipedia's statement above: distance between eyes divided by distance between eye and finger (to find the angle from one eye only, divide by 2, as you see in the formula above).
We can appeal to the right-angle calculator to find the angle. There you see, in the drawing, the hypotenuse as line C, as well as angle A at the tip. Angle C can be viewed as the upper nose area, and B is your one eye 1.5 inches = .125 feet away. Just put .125 in the edge-a box with 2 in the edge-b box and hit the calculate button to find the angle at 3.58 degrees. To find what this is in arc-seconds, multiply it by 3,600.
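The page's two steps can be re-done in a few lines without the online calculator. This is only a sketch using the page's own inputs (eyes 3 inches = .25 feet apart, finger 2 feet away, one eye .125 feet off the nose line):

```python
import math

# Re-checking the eye/finger parallax numbers from the page.
eye_separation_ft = 0.25    # 3 inches between the eyes
finger_distance_ft = 2.0    # finger held 2 feet in front

# The page's formula gives the sine of the per-eye angle at the finger:
sine_at_finger = eye_separation_ft / 2 / finger_distance_ft
print(sine_at_finger)  # 0.0625, matching the page

# The right-angle calculator's step (edge-a = .125 ft, edge-b = 2 ft):
angle_deg = math.degrees(math.atan(0.125 / finger_distance_ft))
print(round(angle_deg, 2))         # 3.58 degrees, matching the calculator
print(round(angle_deg * 3600))     # the same angle in arc-seconds
```

Both printed figures agree with the page, which at least confirms that its small-scale demonstration is internally consistent.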
The article then says, "Now Cassini used the same method to measure the distance to Mars by using the two "Eyes" on the earth". We then find that the formula used for the eyes is plugged with two numbers, the top number being 12,000 kilometers (the distance apart of two telescopes), and the bottom number being 54.1 million kilometers (the hypotenuse), which, I assume, is the distance of Mars at its nearest to earth, as Cassini conceived that distance (the claim is that Cassini was taking the angle measurement when Mars was nearest the earth).
The angle from 12,000 / 54.1 million is given by the article as 20 arc-seconds. Is this correct? Is this what Cassini claimed for his angle measurement? We can use the triangle calculator in two ways to check this math. Technically, we should put 6000 (no comma) in the edge-a box with 54100000 in the edge-b box, and then multiply the result (about .00635 degree = the angle at one eye only) in the angle-A box by 2, but we can alternatively use 12000 in the edge-a box (C becomes the second eye) to find about .0127 degree (double .00635) as the angle at the tip of both lines (= angle of both lines / eyes combined). The .0127 figure, when multiplied by 3,600 to find arc-seconds, gets about 45.7, which is not 20. What's going on? Is this a trick? Is the writer disclosing the math while not realizing that it doesn't work?
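The same check can be run directly. This sketch assumes only the article's two inputs (a 12,000-kilometer baseline and a 54.1-million-kilometer distance); the result comes out around 45.7 arc-seconds, in the same ballpark as the calculator check and nowhere near 20:

```python
import math

# Re-checking the article's Mars-parallax arithmetic with its own inputs.
baseline_km = 12_000          # the two telescopes, one earth diameter apart
mars_distance_km = 54.1e6     # the article's Mars-at-closest distance

# Full angle at the tip of the triangle (at Mars):
angle_deg = math.degrees(math.atan(baseline_km / mars_distance_km))
angle_arcsec = angle_deg * 3600

print(round(angle_deg, 4))     # ~0.0127 degrees
print(round(angle_arcsec, 1))  # ~45.8 arc-seconds, nowhere near 20
```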
It says that Cassini's two eyes were in Paris and Guiana (South America), but these are nowhere near opposite sides of the earth (I'm assuming that there were 12,000 kilometers between Paris and Guiana). Apparently, Cassini had to extrapolate his angle to create a drawing / situation that had both telescopes 1 earth diameter apart. But when we put the earth's diameter (7,918 miles) in the edge-a box along with .00555 degree (= 20 arc-seconds), the distance comes out to 81.7 million miles = 131.5 million kilometers. This is not any meaningful distance between Mars and earth, and it certainly isn't 54.1 million kilometers.
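That implied distance can be verified in a few lines. A sketch, with the 7,918-mile diameter and the 20-arc-second angle as the only inputs:

```python
import math

# What distance does a 20-arc-second angle imply across a baseline of
# one earth diameter (7,918 miles)?
baseline_mi = 7918
angle_deg = 20 / 3600                 # 20 arc-seconds in degrees

distance_mi = baseline_mi / math.tan(math.radians(angle_deg))
distance_km = distance_mi * 1.609344  # miles to kilometers

print(round(distance_mi / 1e6, 1))    # ~81.7 million miles
print(round(distance_km / 1e6, 1))    # ~131.4 million km, not 54.1 million
```

The kilometer figure lands at about 131.4 million (131.5 if the miles are rounded first), confirming the mismatch with the article's 54.1 million.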
There is precious little on Cassini's work when googling "cassini parallax". So far as I'm concerned, the page under discussion is pure garbage. It doesn't satisfy anyone who wants to verify Cassini's work. The page below is about nothing but Cassini's measurement of the Mars parallax, but tells nothing of the numbers or method. I wonder, are Rosicrucians (or that ilk) the only ones who push Cassini's work, deceptively?
Wikipedia's article on Cassini: "In 1672 he sent his colleague Jean Richer to Cayenne, French Guiana, while he himself stayed in Paris. The two made simultaneous observations of Mars and, by computing the parallax, determined its distance from Earth. This allowed for the first time an estimation of the dimensions of the solar system..." The FIRST time. For me, it signals that Rosicrucians were starting their hoax already. Wikipedia's article gives NOTHING on where we may find Cassini's work, formula or numbers for his Mars parallax. Such a great discovery, but nothing offered. We are simply to trust the goons.
The Orbiting Electron
One of the ways you can be sure that modern physics is still wacky, still holding on to evolutionary lunacy, still not thinking for itself, and not thinking logically, is the belief that every atom has one proton per electron, and that the electrons are in literal orbit. This is hilarious, a sure-fire way to know you cannot trust anything that science tells you. Every proton, they say, has a positive charge exactly equal to the negative charge of an electron, and, for this reason, a proton can only attract one electron. Therefore, they think, if they can figure out a way to count the protons at the center of an atom, they can also know the number of orbiting electrons. It's very convenient but hilarious. They show their utter stupidity. The question is, why are they willing to succumb to such utter nonsense? Evolution is the answer.
First of all, one needs to base all of modern physics on a big bang. There is no reason to assume that God would have created the universe with such a bang, and of course there is no reason to expect that all materials, in a universe void of a creator, originated at one speck and then exploded outward. It is a dumb idea, and you will read that this speck was smaller than the head of a pin. The entire universe, they will say without blushing, was trapped inside this small speck, until it exploded. This is not thinking correctly. Not that I have the answer on how the stars were formed, but, certainly, it wasn't with this explosion.
Can we see why they would make the pre-explosion speck so small? Why not have it thousands or millions of miles wide? It's because they have such an over-blown size for the universe. They actually sat down and tried to figure out how small it all needed to be packed in order to explode as far as they think it has gone outward. And they believe they have found evidence that stars are moving away from each other in all directions, which is the only evidence they have that a big bang took place. However, even if the stars are moving away from each other (they probably are), they can be doing so due to their forceful release of solar-wind electrons. As stars fill the universe with electrons, the latter get more dense between stars, and, along with their momentum in all directions, their repelling of one another pushes stars further apart.
A physicist has the ability to understand that stars shooting particles outward will cause all stars to move apart. Yet, I have never read this from anyone, as though the evolutionist doesn't want anyone to get this point because it tends to spoil their claim for a necessary big bang.
Just think logically, and do not trust that a physicist really knows what he's talking about as he apes his superiors. As soon as you see orbiting electrons, you should have the sense to know that they are all a class of boneheads aping one another for a motive. Men tend not to agree with one another, but if they agree together with the impossible orbiting-electron scenario, you know that they have a common motive for being in agreement. What happens to material as it explodes? Well, it rips apart and becomes destroyed. The evolutionists have the speck so small, and the explosion so forceful, yet they say that all protons came out exactly alike, and all electrons came out exactly alike, and they have all neutrons exactly alike, and they are inventing new particles all the time that are exactly alike. If you want an ordered, working universe, you don't start with an explosion that rips your primary particles into pieces that no longer work. That's rule number one. But they are so ridiculous that they claim a wee-little proton cannot be destroyed. I don't know what they think it's made of, but, surely, you can see that they are lunatics. They really need to be jailed for this disinformation, and, you can be sure, God will jail them like One enjoying a great pleasure.
So, an evolutionist starts off with this explosion sending perfect protons and perfect electrons in all directions. The electrons are not orbiting protons yet because both are racing straight ahead through space under the force of the explosion. The nature of particles moving outward from a single location is to move constantly further apart. How will they ever come together in order to form galaxies, stars and planets? Don't worry, the evolutionist says it happened, so all the boneheads in the world believe him. It's as though the world is under their spell. And how does this spell work? The evolutionist has a way of conveying himself as a scholar, testing everything, ever seeking truth, ever smarter than anyone else when it comes to correct gleanings. Yet, in this regard, he is bankrupt on logic. He has no explanation, and yet, without it, thanks to boneheads not thinking for themselves, he gets the masses to trust his claims.
What happens in a situation where protons and electrons, all racing in straight lines, attract one another? It doesn't really take super intelligence to figure this out. I'm not saying that I'm smarter than the boneheads. I'm saying that I don't believe the physics teachers while boneheads do, after they keep hearing, over and over again, that the big bang is a respectable theory. This is the spell cast by the God haters; they act superior in intelligence, and beg the masses to come join them. We have labored hard and freely for truth, and figured out that electrons orbit atoms, they say, and the boneheads believe it.
To put something in orbit, it needs to be flung perpendicular to the central body. But the big bang caused all particles to be moving parallel, more or less, at an angle opposite from perpendicular. How could electrons possibly begin to orbit protons in this situation? Even if they slowed down enough to attract one another, the two are predicted to latch to one another without orbits. Does a magnet pulling a million iron filings get even one piece into orbit? It's impossible, because attraction does not bring the particle to itself at the correct angle for starting an orbit. The chances of getting a particle into orbit by mere attraction between two bodies are slim indeed, yet your superior lunatic tells you that ALL ELECTRONS, if they are part of an atom, are in orbit. You can't be so stupid as to believe this claim, and so, if you believe it, it's because you've never applied your mind to thinking it through. You left your mind open to attack. You became deceived. I have seen no stiff rebellion in the world against the lunatics. No one has protested for the removal of this garbage from the school textbooks.
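The angle-and-speed requirement can be put in numbers with a toy calculation. This is a sketch only, in made-up units with an assumed inverse-square attraction: a body released with no sideways motion plunges straight onto the attractor, while only one particular sideways speed at a given distance keeps it circling.

```python
import math

# Toy model (made-up units, assumed inverse-square attraction) of the
# capture-angle problem described above.
K = 1.0    # attraction strength (assumed)
r0 = 1.0   # starting distance from the attractor

# Sideways speed required for a circular orbit at r0:
v_circ = math.sqrt(K / r0)

def track(vx0, steps=20000, dt=1e-4):
    """Step the motion forward; report the closest and farthest distances."""
    x, y, vx, vy = 0.0, r0, vx0, 0.0
    rmin = rmax = r0
    for _ in range(steps):
        r = math.hypot(x, y)
        if r < 0.05:              # effectively crashed onto the attractor
            break
        ax, ay = -K * x / r**3, -K * y / r**3
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        r = math.hypot(x, y)
        rmin, rmax = min(rmin, r), max(rmax, r)
    return rmin, rmax

print(track(0.0))     # no sideways speed: distance collapses toward zero
print(track(v_circ))  # circular speed: distance stays near r0 throughout
```

The first run ends in a straight-in plunge; the second stays on its circle, which is the point of the paragraph above: attraction alone supplies the pull, not the sideways fling.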
Physicists have had the opportunities to think these things through. And yet they believe it. They think it's absolutely normal for an electron to start orbiting a proton as soon as they come near. I shake my head. I've lost confidence in my superiors. I can't trust the teachers. They have become worthless thinkers. It gets worse when we are told that one proton can attract only one electron. If that is the case, then an electron should never enter an orbit, because a proton is zillions of times more apt to attract a non-orbiting electron. In order to make the situation more possible for forming orbits, the physicist envisions electrons always racing about at fantastic speeds. He thinks that this is not a situation like a magnet pulling iron filings from a stationary position. As the electrons are constantly moving in empty space, never able to slow down (idiot), they always approach a proton in this way. But speed in itself doesn't make orbits more likely. An orbit needs the particle to approach at a certain angle together with a certain speed. The chances are far greater that the electron will strike the proton and bounce away, or swerve around the proton and continue on by.
Evolutionists are law breakers. There are laws broken all over the place. First of all, in the real world, objects that make contact with one another always cause the total energy of the two objects to be reduced. In the real world, contacts made between two things work to bring both to stillness. But the electron is a magical little thing, never coming to stillness no matter how many times it strikes another electron or an atom. This breaks the law, and evolutionists have no right to teach it. Secondly, what law claims that a central body can have only one orbiting body? There is no law. Planets can have many moons, and each one is independent of the other. Who says that one proton can only have one orbiting body? Why should that be so? The physicist says it's because the proton and electron have equal charges. But so what? That's no reason to claim that only one electron can be attracted. Why should a second electron not be attracted into an orbit just because the first one has the same charge as the proton?
Besides, if both particles were from the destructive big bang, how did they ALL ever come to have the same charge??? They say that an electron is very small compared to a proton, yet both are assigned the same force level. Does this not seem to you like something a demented warlock would claim, from some illusion affecting his brain so that he needs a mental ward to recover? He laughs when I say this, because he is proud to have the masses believing in junk. He's a sham, and proud of it. He doesn't care for the condition of his own mind; he's not interested in reality. He wants only to push the claims of other lunatics. Often, he gets paid to do it, and must, whether he feels comfortable or not. This is your world, literally.
One captured, orbiting electron does not cancel the positive charge of the proton. An electron in orbit cannot rob the proton of its positive charge, nor counteract it as it radiates in all directions. The only way for captured electrons to counteract the positive charge of the proton is for the proton to be completely surrounded by electrons, the latter radiating as much negative force in all directions as the proton is radiating positive force. Only then will electrons cease to be capture-able. Evolutionists and their spell-bound "pawns" (this is a war, after all) have some atoms orbited by many electrons so that they can come close to the situation of having negative energy radiating in every direction, yet they have the hydrogen atom, for example, orbited by just one electron, and even in this case, no other electron can come near, they claim.
Obviously, they have the wrong atomic model, especially as every atom is assigned the same number of electrons as there are protons, one electron per proton. They use this model even though they realize that protons repel protons. That is, in their view, every atomic core, aside from the hydrogen atom, has more than one proton, and to explain why these protons don't repel one another away, they invented the strong atomic force that holds them together. It's another law breaker, obviously, and they get away with this, using some off-the-wall explanation, because no one seriously protests their invasion of the minds. They really should be jailed, and they will be, make no mistake about it. They are unworthy of life. If they were yet children, or teens, they could be forgiven, but they are the oldest amongst us, and should therefore be the wisest, yet they push impossible situations as facts. And they insist that these fantasies should be taught to all children, even if parents don't like it. They would prefer to own your children more than you do, and this is your world, now, literally. The education system, in bed with the government, claims ownership of your children to far too great a degree.
They see the electron as very small in comparison to the proton. When they assign atomic weights, they can practically ignore the weight of the electron. They do not take the position, as I do, that the electron has no weight because it is repelled by gravity, but they more-or-less ignore the weight of the electron. They must have realized that materials, when loaded with more electrons, do not show more weight, wherefore they decided to assign atoms with the fewest-possible electrons, and they made them very tiny to boot, which explains their illogical idea that each electron speck has force equal to the comparatively giant proton. And they then went further, prohibiting more than one electron per proton (in normal situations), all of which pushes a lunatic concept that bears no resemblance to real situations that we can plainly see and experiment with.
We need to ask: why didn't they assign just one proton per atom, and why didn't they have electrons simply sitting still on the atoms? Too boring? Instead, they opted for an ever-moving atomic world unable to lose its speed. Atoms are still expressing the explosive force of the big bang; none of all that energy has died down over billions of years. All atoms together still carry all the energy of the big bang. Ludicrous? Of course. Grown men teaching such a thing is shameful, yet they never blush. Instead, they think they are ultra-wise. They assign the weight of an atom according to the number of protons and neutrons that it possesses, even though they know that protons repel protons. They say that each atom, of all the different types that make up the elements, has a different nature based on how many identical protons it has in the core. All the protons are identical; the only difference is the number of protons. But why did they choose this impossible situation? Why didn't they choose the more-logical way, one proton per atom, each atom with a different proton?
They claim that all atoms have weights that are multiples of the hydrogen atom. I looked into this and found it to be untrue. Besides, I already knew that they were deliberate liars by that time. So, they have oxygen weighing 16 times more than the hydrogen atom (it's not precisely correct), wherefore they say that the oxygen atom has 16 nuclear particles (8 protons plus 8 neutrons) to hydrogen's one proton. And carbon atoms, they say, weigh exactly 12 times as much as hydrogen, wherefore the carbon atom has 12 nuclear particles (6 protons and 6 neutrons, with 6 electrons in orbit). Nitrogen, 14 particles and 14 times the weight of hydrogen. Helium: 4 particles and 4 times the weight of hydrogen. And so on, they feed this rubbish to the students and the next-generation physicists. Even if nitrogen weighs 14.25, or 14.33, times more than hydrogen, they will put the number at 14 exactly so as to push their every-atom-is-a-multiple-of-hydrogen theory. In their view, an oxygen nucleus carries the weight of 16 hydrogen atoms, its particles all held together by a "strong nuclear force."
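The exact-multiple claim is easy to spot-check against the published standard atomic weights (mainstream figures, used here only as the numbers under test). None of the common light elements comes out as an exact whole-number multiple of hydrogen:

```python
# Published standard atomic weights (mainstream figures), used here only
# to test the claim that every atom weighs an exact multiple of hydrogen.
weights = {"H": 1.008, "He": 4.0026, "C": 12.011, "N": 14.007, "O": 15.999}

for symbol, w in weights.items():
    ratio = w / weights["H"]
    exact = abs(ratio - round(ratio)) < 0.005
    print(symbol, round(ratio, 3), "exact multiple" if exact else "NOT exact")
```

Oxygen, for instance, comes out near 15.87 hydrogens, not 16, and nitrogen near 13.90, not 14; only hydrogen itself is a whole multiple.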
Why did grown men choose demented ideas when logical ideas were plainly before their eyes? Well, when you start to think about their premise, the big bang, all the answers to these questions become apparent. They had to tailor everything to their holy-cow bang. If it's already hard to believe that the big bang would produce all protons exactly alike, it is far harder to believe that the big bang would form zillions of oxygen replicas, zillions of nitrogen replicas, etc. It was best if they could stick to identical protons and electrons, and try to leave it at that.
But why did they have electrons constantly in motion? If they hadn't, they wouldn't have needed their orbiting electron. It makes them look like silly old men to take on such a view. Why did they? Because, at the time of the electron's discovery, they were doing away with defining heat as a substance, and turning it into the motion of atoms. They badly needed to have atoms in constant motion, and, in order to teach this, they needed to break the law, saying that atoms making contact always share their speed-energy and therefore never slow down. Why did they do this on top of their other wackiness? Because they needed all atoms to attract one another; otherwise the big bang could not form stars and galaxies. Therefore, they had to claim that electrons never stop moving, just as atoms never stop moving. That cup on your desk has atoms that never stop moving, even though the cup's atoms are confined to the same spot. The atoms are ever-vibrating, they say, and, once turned back into a gaseous state, the cup's atoms will be free to roam again.
How many things do you know of that, if put into vibration, will continue to vibrate forever? What stops a guitar string from vibrating? Not gravity. Not air friction. The string loses energy each time it flings to one direction because the atoms in the string are part of a solid material. Solids have bonded atoms tending to resist the bending of the material. Just like your cup. It isn't bouncing around on your desk, but it would be if every cup atom were in constant vibration. Atoms in a solid material cannot keep on vibrating because atomic attraction works against motion. You cannot have a magnet that pulls a moving metal marble where the marble keeps on moving once captured by the magnet. The attraction of the magnet makes the motion of the marble stop. Attraction is in one direction only, and causes the attracted items to cease motion because it's constantly pulling in the same direction. It will cause a vibrating atom to decelerate with each passing microsecond, which seems plain enough, but the evolutionist becomes demented by the need to keep his holy-cow view of the atomic world. He will teach obvious error for the sake of cosmic evolution. A guitar string ceases to vibrate because atoms are pulling on one another. The finger stretches the atoms apart when it pulls the string, but the atoms will cease the string's motion because they pull one another in all directions (attraction acts in one direction at a time even though an atom attracts in every direction, because each thing under attraction lies in only one direction from it).
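The everyday behavior described above can be put in numbers with a toy model (all figures assumed for illustration): if each swing of a plucked string loses even a small fraction of its amplitude, the vibration dies out after a finite count of swings rather than continuing forever.

```python
# A hypothetical plucked-string amplitude that loses a small fixed
# fraction on every swing -- the everyday decaying vibration, as opposed
# to a vibration that never dies down.
amplitude = 1.0
loss_per_swing = 0.02        # assumed 2% amplitude loss per swing

swings = 0
while amplitude > 0.01:      # assumed "falls silent" threshold
    amplitude *= (1 - loss_per_swing)
    swings += 1

print(swings)  # a finite number of swings, then silence
```

With these assumed figures the string falls silent after a couple hundred swings; no rate of loss, however small, leaves the vibration going forever.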
The physicist would say that atomic vibrations are taking place on such a small molecular level that you can't feel them, and neither should your cup jiggle on your desk, for the same reason. And you would believe the physicist instead of me because you believe he knows better. The fact is, there's a lot that makes no sense about what he claims, and yet people still believe him. I say that it doesn't matter that atomic vibrations are on a microscopic level because, if it's happening to every atom in the cup, the situation ceases to be microscopic, and becomes macroscopic. If it's merely the atom doing it while every atom is doing it, the cup will be doing it too. The microscopic vibration, multiplied by the great numbers of atoms in the cup, will result in noticeable vibration of the cup. If there is enough vibrational energy in the cup that, if it were turned to gas, the atoms would all fly at hundreds of miles per hour (their claim), you need to be a foolish old man to think that the cup wouldn't vibrate / jiggle / bounce across your desk while in the solid condition.
So, because evolutionists needed atoms in attraction, they opted for the never-ceasing atomic model. And they call the end of the universe a heat death because they define atomic motion as heat. Some will claim that atoms are losing energy ever-so-slowly so that the heat death is inescapable. But this is lunacy. The reason that they have atoms in constant motion is because they plainly see gases acting as though gas atoms are inter-repelling, and, oh-no, if atoms all inter-repel, how could the stars form from the big-bang explosion? All the material flying constantly further apart could never come together again if atoms were repelling one another. The most that could take place is the formation of atoms, as outward-flying protons attracted electrons, but, if the atoms did not attract, they would all keep flying into infinity, unable to form stars or planets. Oh-no, the goons needed to find a way to have gas atoms attracting one another, even though it seems that helium atoms in a balloon are all inter-repelling.
If they were not so illogical, I wouldn't have called them goons. They were goons because they could plainly see the inter-repelling reality, and yet they opted for inter-attracting gas atoms. They developed the idea that gas atoms were flying too fast to attract one another into a bond until their speed was slowed sufficiently. Once their speed was slowed sufficiently, they would bond by inter-attraction and thus form liquid. Conveniently for their theory, the slowing down of gas atoms was defined as cooler gas. Hence, as gas cooled, the atoms slowed, and were permitted to capture one another, thus forming liquid droplets. It can sound logical, but they had no choice but to claim that the atoms of liquid droplets continue to move forever, now in vibration, so that, once heated, the liquid would turn to gas and see the atoms flying in all directions, again, at hundreds of miles per hour. No change in the energy of the gas atoms occurs by being turned repeatedly into a liquid or solid. As this is impossible, why didn't they just opt to view gas atoms under inter-repulsion? Because they had opted to be God-hating goons, driven by hatred toward the Creator, not wanting the Creator in their universe. I know no better definition for a darkened goon.
So, what do you do with electrons when you claim that microscopic particles can never slow down? How will the proton capture the electron if the latter can never stop moving? Ahh, they were forced to invent the electron orbit, silly old bats. They really should be jailed, and they will be, make no mistake about it. They have sacrificed their freedom in life, and life will be taken from them. Why didn't they say that electrons were caught and forced to vibrate rather than orbit? Because, if the electron were to be captured by the proton in the ordinary way, the electron would cease to move. We imagine an electron crashing into a proton, and bouncing away but not hard enough to escape. It then falls back down to the proton and bounces again, and again. This cannot go on forever in a continuous vibration, obviously, because the electron would slow down on every upward bounce, just as a ball cannot bounce forever as gravity pulls it from one direction only.
If electrons cannot bounce forever upon the proton, how can evolutionists have atoms in vibration upon one another? Isn't an atom bouncing up and down on another atom going to slow down too? Of course. But the atom in a liquid / solid situation has it even worse, because, on the upward bounce, it's going to strike another atom, wherefore both atoms will cancel each other's momentum. It's as simple as one atom transferring its energy to the other atom, serving to slow it down. The evolutionist has the masses duped into thinking that an atom transferring its energy to another atom will see it losing zero energy, even though they know that one water wave contacting an equal water wave will cancel both waves instantly. Ten units of force contacting an object (head-on) moving with ten units of force will slow the object down to zero motion / velocity.
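In mainstream terms, the case described in the last sentence is a perfectly inelastic head-on collision, and the arithmetic (a sketch with assumed unit masses and speeds) shows both bodies ending at rest, their motion energy used up:

```python
# Head-on collision of two equal masses with equal and opposite speeds.
# In a perfectly inelastic (energy-dissipating) collision the pair ends
# at rest; the idealized lossless bounce is the case the text objects to.
m, v = 1.0, 10.0             # assumed mass and speed units

# Momentum before: +mv and -mv cancel.
p_before = m * v + m * (-v)

# Perfectly inelastic: both move together afterward.
v_after = p_before / (2 * m)
ke_before = 2 * (0.5 * m * v**2)
ke_after = 2 * (0.5 * m * v_after**2)

print(v_after)              # 0.0 -- both bodies come to rest
print(ke_before, ke_after)  # 100.0 vs 0.0 -- the motion energy is used up
```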
When they say that energy cannot be destroyed, they trick you. Canceled energy is not destroyed energy. By canceled, I mean counteracted, and energy that counteracts energy is fully used up, not destroyed. If two atoms moving at identical speed contact head on, and then bounce away at the same speed, as the goons claim, this is tantamount to destroying energy, because they have ignored the energy that was in-coming toward both atoms. They themselves destroy it, so to speak, because they don't acknowledge it. If both atoms bounce off of one another without losing speed, no energy was transferred to either atom. But it is impossible to strike anything without the transfer of energy. Darkened goons have a hold on government-sanctioned education, which doesn't say much for government leaders.
So, the electron has bounced onto the floor of a proton. It bounces up, striking nothing on the way up. It hasn't got the energy to get away, and comes back down again. It bounces a second time, going up less distance this time, and it eventually comes to a stop. This is the real situation. Unless something causes the electron to move, it will stay motionless on the atom. Being unable to teach this view because it robbed them of their ever-bouncing atomic view of liquids, they were forced to teach the orbiting electron that maintained all of its speed. This is what their so-called kinetic theory of heat did to the atomic model: it produced the orbiting electron as well as the moron.
There can be no such thing as their hydrogen atom because a proton is capable of attracting more than one orbiting satellite. There is no law that forbids the second or third or fourth satellite. There is no law that says that an atomic core of 12 protons must have only 12 satellites. Clearly, their view of the atom is wholesale incorrect. To boot, they have the electron orbiting at zillions of times per second, and we normal people know that no satellite can orbit that fast. There is nothing predictable in the atomic world that would make things different than a moon orbiting a planet. The proton has attraction to the satellite, and the satellite goes round at just the perfect speed and angle, but zillions of times per second is definitely not the correct speed. Evolutionists, you are morons. You need to wipe your slates clean, because you are guilty. God would be proud of you to see you change. He would really take notice.
Although gas atoms inter-repel, they can form liquids when two atoms are forced to make contact. Contact of gas atoms, for example in the rain clouds, causes the proton of one atom to attract the electrons of another atom, and vice versa. This is predicted to cause one atom to sink or merge into another. The outer layers of captured electrons on every atom are hovering (for the same reason that air atoms hover in the atmosphere), and the space between electrons allows the atoms to merge. It's a fantastic design by God. It is necessary that liquid atoms can be unmerged, or we would not get rain water back into the sky. To unmerge gas atoms, the Creator made free electrons between atoms -- which is the real definition of heat -- which causes atoms to become more-highly negative. The more that free electrons (= heat) are added to a liquid, the more that liquid atoms adopt negativity, and the more that they inter-repel, therefore. When the amount of negative charge between liquid atoms is sufficient, they will overcome the attraction that bonds them, and will revert to the gas condition.
Gravity plays a part in the altering of a liquid into a gas, for as gravity repels electrons upward, there is some upward momentum of streaming particles through a liquid that helps to disconnect atoms from the top layer of the liquid, freeing them as gas atoms into space. You can really see this upward momentum at work in a liquid coming to boil. I define boiling as the point when electrons have built up sufficient speed / power to counteract perfectly all air pressure upon the liquid, as well as the bonding forces of liquid atoms. Air pressure squeezes a liquid downward, causing resistance to electron flow through it. Once the combined resistance of flow is removed fully by increased speed of upward-thrusting electrons, the latter flow into the water as fast as they flow out...which is why the liquid temperature cannot increase at boiling no matter how much heat (i.e., free electrons) is added. This is so logical that I can believe it as a fact.
The upward thrust of electrons at boiling can be compared to people walking out a door slowly at first, where the door closes to some degree before the next person gets to the door, wherefore work needs to be done to get the door open again and again. As people start to go through the door in more numbers i.e. more per unit time, the door needs to be open progressively less...until people file out fast enough that the door is always fully open by the time they get there. Zero resistance has been achieved. It's known that, the less air pressure (e.g. on a mountain top), the less heat needed to bring any liquid to boil.
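The pressure observation in the last sentence can be quantified with the published Antoine-equation fit for water's vapor pressure. This is a mainstream empirical formula, used here only to put numbers to the observation; the constants are the standard ones for water (valid roughly between 1 and 100 degrees C, pressure in mmHg):

```python
import math

# Boiling temperature of water at a given air pressure, from the
# published Antoine-equation constants for water (P in mmHg, T in C).
A, B, C = 8.07131, 1730.63, 233.426

def boil_temp_c(pressure_mmhg):
    """Temperature at which water's vapor pressure equals the air pressure."""
    return B / (A - math.log10(pressure_mmhg)) - C

print(round(boil_temp_c(760), 1))  # sea level (760 mmHg): ~100.0 C
print(round(boil_temp_c(380), 1))  # half an atmosphere: ~81.7 C
```

Halving the air pressure drops water's boiling point by almost 20 degrees, matching the mountain-top observation.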
I have confidence in this view of heat because it explains what physicists call "critical temperature," which they define as the point that a substance no longer maintains a liquid state after being compressed from a gas to liquid. As soon as the compression instrument is removed, the liquid immediately disintegrates to a gas, whereas, below critical temperature, the liquid remains a liquid. I define this as liquid atoms / molecules, on the surface of the liquid, having sufficient negativity due to heat levels that they repel one another with greater force than air pressure and atomic bonding combined work to hold them in the merged condition. It's evidence that all liquid atoms inter-repel, explaining why gases can do things like hold up cars with inflated tires.
Don't you think the science textbooks should teach this alternative view of heat as a substance, if it is a viable option for explaining heat's properties? But this idea that electrons define heat makes all substances appear to be inter-repelling, which is evolution's arch-enemy. That's the real reason this view of heat was never adopted. Some people probably advanced the idea, but it was quietly rejected in favor of kinetic heat, a falsity. Do evolutionists get anything right with the atomic model once they begin on the wrong footing? Not much.
The electron-equals-heat view insists that hydrogen atoms are the largest while metal atoms are the smallest, the very opposite of the claim of modern science. This can be seen where the lightest gases rise through the air more efficiently than the heaviest gases. In fact, gases heavier than air sink to the floor / ground because air has larger atoms than the heavier gases have. I'm saying that, the lighter the substance, the larger its atom. It then generally follows that, the lighter the substance, the larger or more powerful its proton. Let's imagine a single hydrogen atom, with a large proton able to attract the most electrons. Envision it in the air beside a heavier helium atom with a smaller proton, and therefore with a smaller sphere of electrons. The bottom line in this theory is that the atom with the largest bottom surface is the one that gets the most lift force from upward-streaming electrons as they flow out into outer space.
This theory is made far less complicated because all atoms weigh the same; gravity has arranged them so (see the last chapter for the reason). Therefore, the level of upward force on every gas atom is proportional to the area of its bottom surface. The larger the atoms are, the greater lift they will get at any given temperature. It's the temperature that determines the speed and force of the upward electron flow.
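The proportionality claimed in this paragraph can be written out as a toy sketch. The radii below are purely hypothetical placeholders chosen for illustration (no measured values are asserted); the sketch only encodes the stated model, namely that with equal atomic weights, lift at a given temperature scales with the area of the underside:

```python
import math

def relative_lift(radius):
    """Per the paragraph's model: lift is proportional to the bottom
    cross-sectional area of the atom (all atoms assumed equal in weight)."""
    return math.pi * radius**2

hydrogen_lift = relative_lift(2.0)  # HYPOTHETICAL radius: the larger atom
helium_lift   = relative_lift(1.5)  # HYPOTHETICAL radius: the smaller atom
print(hydrogen_lift > helium_lift)  # True: the larger underside gets more lift
```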
The kinetic theory of heat fails us when it comes to releasing heat into outer space, for it has no mechanism for heat loss through a vacuum. This is the Achilles' heel of the kinetic theory. The speed of heat transfer through a vacuum is, according to that theory, proportional to the gas density of the space. Yet, when vacuums are made thousands or millions of times less dense than normal air, heat still moves through them rather normally. In fact, I do not believe there is any appreciable resistance to heat flow in a vacuum. I think such resistance is more the evolutionist's fable than reality. Even if a vacuum offers some resistance to flow as compared to normal air, there can be another explanation besides kineticism's prediction...which is a false one. Heat needs to escape into outer space as fast as it comes in daily from the sun; otherwise, the earth's atmosphere would constantly build in temperature. You don't find scientists talking about heat loss into space very much.
Ignoring wind, every gas atom has a maximum height due to upward electron flow: when the upward force on its bottom side equals the downward force of gravity, that is its maximum height. This is why rain clouds generally have a maximum height. Water molecules, made of oxygen and hydrogen, are relatively massive in size, yet weigh roughly nine times as much as hydrogen molecules. These are the two factors determining water's maximum lift height.
Now, let's imagine the hydrogen atom beside the helium atom again, only this time with the kinetic theory. The two atoms are now not to be viewed as motionless, but as racing about, crashing into other atoms, and bouncing away in all directions equally. What will the kineticists appeal to for the lift of gases? Why should hydrogen rise higher and faster than helium? Because it's a lighter atom? So what if it's lighter; why should it rise faster and higher? In their theory, the lightest gases are said to rise highest, but they have no explanation for it, because even a light cannon ball falls to earth. It doesn't matter how small the cannon ball is; even if it's a single atom, it is supposed to fall to earth in the evolutionist's Newtonian theory. Why should a hydrogen atom rise at all if it's pulled down by gravity and does nothing but crash about into all the air atoms? Shouldn't it go downward as much as upward, since it deflects into all directions equally?
They can't appeal to the buoyancy principle, though that is exactly what they do appeal to, trying to fool us. There is no buoyancy principle when a single atom is surrounded by pure space. Buoyancy is a principle in water, not in empty space. Buoyancy has been proven to be due to the water pressure underneath an object being greater than the water pressure on top of it, thus giving the object a net lift force. But this is not true of air pressure, because air atoms are not in contact with the hydrogen atoms. They are lying to you, as they constantly do, hoping that you won't catch their dirty trick. There cannot be air atoms giving hydrogen atoms any lift in the kinetic theory, and this is another Achilles' heel of kineticism.
We need another theory to explain why the lighter gases rise in air, and why gases heavier than air sink to the bottom of an air mass. If you put 50 gases into the same box, they would layer themselves according not to their so-called "atomic weights," but to the sizes of their bottom sides, for it's the upward, anti-gravity electron flow that gives them lift. Gravity is indirectly pushing them up even while gravity directly pulls them down. Now you have the truth, because there is no other explanation, at least not a knowable one. As far as we know, pure air is made up of air atoms and heat particles, nothing more. What else, in this picture, could give a single hydrogen atom lift, if not the rising heat?
Hotter air does not rise because it's lighter. This is the evolutionist's trick: his buoyancy principle applied falsely to air. He fully understands what buoyancy is, and so the error is sown knowingly amongst mankind, taught to the children and to their fathers and mothers.
If you put a ball, or a single hydrogen atom, in water, the water molecules are in contact with its underside. The buoyancy principle can now work due to this contact; the water pressure can now lift the ball or atom through the water. This is why an air bubble in water rises speedily. Yes, a bubble of hydrogen in water rises, because water molecules are in contact with the bubble's atoms; the water has squeezed itself all around the bubble. The bubble will be smaller at greater depth in the water because greater water pressure squeezes it with more force. The gas pressure in the bubble is therefore equal to the water pressure. To put it another way, when the bubble pressure equals the water pressure, that equality determines the size of the bubble: the water cannot squeeze the bubble any further, and the bubble pressure cannot push the water away any further, when the two are equal in pressure.
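The pressure-balance claim here, that the bubble shrinks with depth because its internal pressure must match the surrounding water pressure, can be put into numbers with standard hydrostatics and Boyle's law. A minimal sketch, assuming an ideal gas and ignoring surface tension:

```python
P_ATM = 101_325.0  # air pressure at the water's surface, Pa
RHO   = 1000.0     # density of water, kg/m^3
G     = 9.8        # gravitational acceleration, m/s^2

def bubble_volume(v_surface_cm3, depth_m):
    """Bubble volume at a given depth, from its surface volume.
    Water pressure at depth = bubble pressure; Boyle's law: p1*v1 = p2*v2."""
    p = P_ATM + RHO * G * depth_m
    return v_surface_cm3 * P_ATM / p

print(bubble_volume(1.0, 0.0))   # 1.0 cm^3 at the surface
print(bubble_volume(1.0, 10.0))  # ~0.508 cm^3: roughly halved 10 m down
```

At about 10 m of depth the water pressure is roughly double the surface pressure, so the bubble is squeezed to about half its surface volume, illustrating the smaller-with-depth behavior the paragraph describes.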
It becomes clear that water will squeeze the bubble until the gas atoms are near enough to one another that water molecules can't fit between the pores of the gas atoms, and neither can the gas atoms squeeze between the pores of the water molecules. If the gas in the bubble is below its critical temperature, the water can cause the gas atoms to merge into liquid, thus making a bubble impossible; but hydrogen and air are both above their critical temperatures in a normal environment. For this reason, hydrogen and air atoms in a bubble will refuse to merge into liquid even when water squeezes around them. According to the kineticist, gas atoms stay unmerged because the speed of the atoms is too great to allow merger: if some make contact slowly enough to merge, along comes a fast one and knocks them apart again. But this view cannot define or explain critical temperature. If gas atoms are already moving fast enough to retain their gas state, what difference does critical temperature make?
First of all, let's address the fact that squeezed gases form liquid. Why should they in the kinetic picture? Squeeze an air bubble hard enough, and it will turn to liquid. The so-called energy of a gas -- which describes merely the speed of its atoms -- does not change when a gas is squeezed. Instead, the energy is packed into a smaller space. So what? Why should this cause gas atoms to merge as liquid? If their speed was too fast to form a liquid in the unpressurized condition, why should the pressurized condition cause them to liquefy? There is no explanation, for they are making more contacts per unit time in the pressurized condition, which serves only to keep the atoms more apart than in the unpressurized condition. The kineticist will argue that bringing the atoms closer causes them to liquefy, but this is bogus: it is the fact, but it is NOT the prediction of his theory.
The fact is that pressurization brings atoms into contact, and contact causes liquid formation. The kineticist ought to argue that removing the pressure will cause the liquid to instantly gasify: as the gas atoms were too speedy to bond unpressurized, removing the pressure should cause them to instantly separate. The fact is, the liquid does not instantly gasify unless it is above its critical temperature. This proves that kineticism is wrong. Critical temperature teaches that gas atoms will liquefy upon contact with one another, and that heat above and beyond the evaporation temperature is needed to separate them instantly. In kineticism, the prediction is that merely being above the evaporation (boiling) temperature will cause atoms to separate instantly. Kineticism has no added component by which to explain critical temperature.
To put this another way, let's heat some water in a sealed container to above its boiling temperature, but not above its critical temperature. Remove the lid and see what happens. The water will evaporate faster than when at boiling temperature, but it will not instantly disappear as it will when above its critical temperature. What is the difference between the two? Clearly, critical temperature is when atoms possess sufficient heat to counteract whatever bonds them. The boiling point is able to slowly separate the liquid's atoms, but only from the liquid's surface, while critical temperature brings a massive separation of liquid atoms throughout the liquid body: atoms want to get away from atoms, which is not the case at the boiling point. Rather, at the boiling point, something is lifting water away from water. You can see the upward motion of bubbles in a boiling pot of water. Something is speedily rising, knocking water molecules out into the air.
Kineticists call them water-vapor bubbles. They lie so as not to contradict their kinetic theory, but, clearly, these are not water-vapor bubbles. They rise far too fast to be any type of gaseous bubble. The bubbles clearly have their source at the heat source. Why can't they be electron bubbles, therefore? Electrons are being forced from the heat source into the water; shouldn't inter-repelling electrons form bubbles? As electrons are repelled by gravity, this explains the super speed of the bubbles at the boiling point. The bubbles increase in velocity with increasing water temperature, which is predictable where hotter water yields bubbles with a higher density of electrons. If there is some water vapor in the bubbles, fine, but they are not purely water-vapor bubbles. Electrons rise through the water, in the bubbles and out of the bubbles, and knock surface water into the air. The water continues to rise as steam because the electrons flow up through the air. If there were no electrons, the water molecules escaping the surface should, by gravity, fall back into the water body, for there is no such thing as buoyancy in air (unless the air / gas is in a material balloon).
One gets the impression, from reading physics books and articles, that they don't want us to equate electrons with heat. They don't want us to arrive at the idea that has surely occurred to them: that electrons can describe heat. They know this; it would be obvious to them, but they want us locked into their kinetic fantasy. This is the beginning of macro-evolution, and it matters very much to them that we all believe as they do. And this is why they are darkened goons pretending to be our teachers.
In the last chapter, we met Henry Cavendish, who obtained the gravity constant held to today. After Newton proposed that gravity was mass x mass, they still needed a way to describe gravity for the math. Cavendish got a number to represent gravity, that is, and if you multiply his refined number today by the number NASA claims for the earth's mass, you will get 398,591,302,000,000 as the result. If you then divide the latter by the squared distance (in meters) above the earth's core, you will get the power of gravity at any given height. When we square the ground's distance from the core (6,371,000 meters), we get 40,589,641,000,000. When we do 398,591,302,000,000 / 40,589,641,000,000, the answer is 9.82002. You can find online that gravity is assigned 9.8 m/s2 at ground level.
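The arithmetic in this paragraph is easy to reproduce. A minimal sketch using the paragraph's own figures:

```python
# The paragraph's figures: (gravity constant x earth mass), and the ground's
# distance from the earth's core.
GM = 398_591_302_000_000.0  # m^3/s^2, as given in the text
r  = 6_371_000.0            # m

r_squared = r**2            # 40,589,641,000,000 m^2, matching the text
g = GM / r_squared          # acceleration of gravity at ground level

print(g)  # lands at ~9.82 m/s^2, in line with the paragraph's 9.82002
```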
Cavendish set the gravity-constant number in 1798, after the 93-million-mile hoax was under way. The page below is titled "HOW IS THE MASS OF THE EARTH DETERMINED?" It appears to show circular reasoning, which agrees with my accusation in the last chapter that they fixed the product of earth mass and gravity constant as a certain number that purposefully reproduces the observed behavior of falling objects. It says: "Cavendish was the first person to determine Newton's gravitational constant and accurately measured of the Earth's mass and density." If he already had the earth's density at that time, it wasn't because he put the earth on his weight scale. He had no idea what the earth's mass was, but as the gravity constant hasn't changed since then, clearly, this is a hoax, because the earth's mass can't change either if the gravity constant can't.
The numbers were cooked, in other words. You can make this out in the page's explanation, where it first says: "Galileo determined that the acceleration due to the force of gravity of Earth was a constant equal to 9.8 m/sec2 near the surface of the Earth." Galileo lived before Cavendish, and so it's easy to see that Cavendish was working off of Galileo's number. Cavendish merely had to choose two numbers that, when used in the formula, gravity constant x earth mass / distance from the earth's core squared, gave Galileo's number of 9.8.
So, how was the earth mass discovered according to the page above? It was discovered from the math, like so:
M = acceleration x distance squared / gravity constant,
where acceleration = 9.8 m/sec^2, distance (earth radius) = 6.4 x 10^6 m, and G = 6.67 x 10^-11 m^3/(kg sec^2).
But wait: in order to get 9.8, one needs to use the earth's mass in the math. How can one then use 9.8 to get the earth's mass? That's circular reasoning. So the question becomes: how did Galileo get his 9.8 figure?
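Carried out numerically, the page's formula does return the familiar earth-mass figure, which is exactly the circularity being pointed out: the 9.8 going in is the same 9.8 the mass is later used to produce. A quick sketch with the page's own numbers:

```python
# The page's back-calculation: M = g * r^2 / G, using its own rounded figures.
g = 9.8          # m/s^2, Galileo's surface acceleration
r = 6.4e6        # m, the earth radius as the page rounds it
G = 6.67e-11     # m^3/(kg s^2), Cavendish's constant

M = g * r**2 / G
print(f"{M:.2e}")  # about 6.0e+24 kg, the familiar earth-mass figure
```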
At Wikipedia's Earth Mass article, under the section "History of measurement": "The mass of Earth is measured indirectly by determining other quantities such as Earth's density, gravity, or gravitational constant. Modern methods of determining the mass of Earth involve calculating the gravitational coefficient of the Earth and dividing by the Newtonian constant of gravitation, M = GM / G." In my opinion, GM / G is meaningless. It looks like a trick. Of course GM / G is going to equal M, as the G's cancel one another out. So what? I can say anything just like that. For example, I can say that the earth's mass is equal to potatoes x earth mass / potatoes, and the answer is absolutely correct. What joke is this? What does it mean to say that the earth's mass is related to the G constant? Who says so? Why doesn't the math, G x M, serve to reflect the moon's acceleration?
At the page below, we find that Newton was already tinkering, long before his fan Albert Einstein, with the idea that the solar system has no ether; otherwise, he thought, it would shake the laws of planetary orbits as he had defined them. He says that if anyone can prove that this ether does not obstruct planetary orbits (i.e. with friction), he would not object to its existence. However, this was his gentlemanly way of not insulting those who held to the ether view.
Newton was responding, in particular, to one who believed that the excitement of particles in the sun excited the ether, forming what we see as light. Light exists in the eye only; outside of the eye it is nothing more than captured electrons striking free electrons, the latter representing the ether. Atoms are said to be "excited" when light shines upon them, meaning simply that the excitation of atoms at the light source passes the energy of excitation through the ether until it strikes other atoms and excites them too. The orbiting electron cannot be excited by contact from outside particles. Instead, the orbiting electron crashes, because it must go out of orbit when struck by a particle, especially if the outside particle is traveling at 186,000 miles per second. There simply cannot be orbiting electrons, because orbits are delicate things. The reality is that electrons hover over a protonic surface, and they bounce about when excited. If they do not have escape velocity, they bounce back toward the interior of the atom. It's so simple, but not good enough for the lunatics, starting with Einstein. His great achievement was to discover that electrons do escape atoms when light shines on substances. That much could be expected.
The atoms quickly reload, however, and you have got to be smart enough to realize that electrons don't go back into orbit as atoms reload. Light shines on everything under the sun, so that there are electrons escaping all over the place. Do you think that all the escaped electrons are replaced, in every case, with orbiting electrons? What kind of fantasy infected Einstein's mind? Why did he allow himself this ridiculous view? There must have been a hard reason for him to take this extreme and impossible scenario as reality: peer pressure from evolutionists.