QUANTUM SPACE ELEMENTS

Peter Vass

Private Research

 

 


 

A cosmological system is proposed which is cyclic, deterministic, and infinity-free. The necessary absolute frame is derived through a set of assumptions designed to eliminate infinities. Quantum noise resolves into determinism if the interaction of information is processed incrementally, in undetected time-steps. In this case relativity preserves traditional math over discrete physics. Gravity is discussed as information linked to massive hosts. Gravitational acceleration is treated as the product of asymmetry in the interaction between vacuum energy and gravitational fields. The system is cyclic and allows creation to be reversed by emerging from within. This is examined in the context of perturbation mechanics embedded within any thermodynamic system functioning under internal freedom over adequate elements and time steps.

 


 

ASSUMPTIONS


i. Space is discontinuous. It breaks down to indivisible space quanta, the space elements.

ii. Space elements are stationary. They accommodate and transfer information. 

iii. Each space element can only accommodate a finite amount of information.

iv. Information transfer is not instant. Processing is required.

v. Processing is inefficient. It can be executed for a very large but finite number of repetitions.

 


CONSEQUENCES  


Time is a rate of change. It is the rate at which the properties of space elements are modified according to the information they receive or transfer. There are two components of time: observed time and processing time. Observed time (t) is experienced through life and recorded through instrumentation. Processing time (ΔΤ) is undetectable. The flow of time is absolute and defined as the sum t + ΔΤ. Depending upon the properties of events, absolute time consists of different ratios between its observed and processing components. Since ΔΤ is undetectable, the perception of time relates only to the observed component of time. Perceived time is therefore relative.

 

Motion cannot exist in quantum space; there can only be a state of presence and an intention of motion. This realization introduces frequency. Quantum information can only propagate in quantum steps defined by the frequency of the universe. The early universe resonated at a considerably higher frequency because it had not yet suffered deterioration due to its own inefficiency. The flow rate of absolute time decelerates as the cosmos ages.

 
Space elements are stationary, providing the reference grid for information to exist within. Nothing exists outside space elements and no distance is necessary between them. What defines the individuality of each space element is time, its own rate of change. Quantized time may be described as a layer of resistance (delay) in the propagation of information. This layer surrounds each space element, defining its existence. Space geometry translates into time geometry.

 

Kinetics  

The following diagram simplifies quantum motion in a synoptic graphical format:

       

The slow-moving traveler “X” ages 16/19 pulses while the fast-moving traveler “Y” ages 13/19 pulses. In both cases the total absolute time is 19 pulses. If the two travelers had synchronized clocks, then at t=19 the faster-moving traveler would register slower time flow. The introduced transition time between events (the new time component) is of such microscopic order of magnitude that it only becomes apparent in the special condition of events occurring at a frequency fast enough to be comparable to the universe’s frequency. The diagram below displays the perception of time by travelers at various speeds with respect to the absolute frame of reference (the space grid) and with respect to each other. This diagram plots velocity over distance travelled and so time is inverted (1/t).

Perception of time is modified with velocity because processing becomes increasingly relevant. Since processing remains undetected, every motion within the space-element (SE) grid introduces its own reference frame, which in turn needs to be consistent with every other reference frame, because there is an invisible common ground that bonds them all together: the absolute time flow in the absolute reference frame of the quantum space elements grid. As a result, space-time relativity manifests at those velocities as a geometric necessity and the following is re-derived:

t’/t = 1/(1 − v²/c²)^½ = γ

The flow of absolute time follows the frequency of the universe. Absolute light speed is motion at that frequency. All pulses become processing events and no observed time registers with the traveler. Light speed in vacuum is always observed at 3×10⁸ m/s since the speed differential due to motion relative to the absolute frame is always cancelled out by the detection methods used (two-way observations). This is the maximum allowed speed since no events can materialize faster than the frequency of the universe.
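
As a worked illustration of the pulse accounting above (a toy sketch only; the split of absolute pulses into observed and processing pulses is assumed here to follow the re-derived factor γ, and the two speeds are chosen so that the counts land near the 16/19 and 13/19 example):

    # Toy pulse-budget model (illustrative sketch, not a derivation).
    # Assumption: out of T absolute pulses, a traveler at speed v = beta*c
    # observes T/gamma pulses and spends the remainder as undetected
    # processing, consistent with the gamma factor re-derived above.
    import math

    def pulse_budget(total_pulses, beta):
        gamma = 1.0 / math.sqrt(1.0 - beta**2)
        observed = total_pulses / gamma        # pulses the traveler ages
        processing = total_pulses - observed   # undetected processing pulses
        return observed, processing

    for name, beta in [("X (slow)", 0.54), ("Y (fast)", 0.73)]:
        t_obs, dT = pulse_budget(19, beta)
        print(f"{name}: observed ~{t_obs:.0f}/19 pulses, processing ~{dT:.0f}/19 pulses")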

 

Problem 1: Acceleration

There is a problem with this description of quantum motion. It implies that acceleration cannot be linear, especially at relativistic speeds. In fact, the final step of acceleration needs to jump from 0.5c to 1c without any middle ground. One possible solution involves motion of complex information clouds (particles are very complex systems compared to individual space elements), instead of single information bits traveling between specific space elements. Such complex motion approximates a linear event because of the vast number of space elements involved in transferring the information along the trajectory, producing fuzziness. The mechanism remains unclear, but it might be of a similar nature to the mechanism which translates the fuzziness of particles into the observed linearity of macroscopic events.
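
A minimal sketch of the gap itself, under the assumption (consistent with the description above, but not stated as a formula in this essay) that a single information bit can hop at most one space element per pulse, so its only steady speeds are c/n:

    # Allowed steady speeds for a single information bit, assuming one hop
    # every n pulses (illustrative assumption): v = c/n.
    allowed = [1.0 / n for n in range(1, 7)]    # as fractions of c
    print(allowed)    # [1.0, 0.5, 0.333..., 0.25, 0.2, 0.1666...]
    # No steady speed exists between 0.5c and 1.0c, hence the final jump.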

 

Problem 2: Spatial geometries and irrational numbers

Irrational numbers have always been a problem for granular space. The sides of a square may be whole numbers, but the diagonal can never be, since it involves the square root of 2. This problem can be solved by flipping the coin to its other side. If whole numbers are nothing but a special case of irrationals, then all of nature speaks approximate math and only humans find pleasure in the illusion of a precise mathematical language. This has already been suggested by quantum mechanics, but uncertainty (which will be discussed later) is not enough to explain the issue. The fundamental solution exists within the mechanism of motion and not within its consequences. As noted in the previous problem, existence is fuzzy because of the large number of space elements involved in defining any physical reality, no matter how small reality gets. The motion of any physical entity (including energy) is the aggregate motion of its individual information bits. Unless ideal conditions are considered (information bits of individual space elements), only irrationals make sense. The mechanism remains deterministic because it is physically and theoretically indivisible. Beginning and end points are fixed, making the path irrelevant.

 

Notes:

a) The traditionally assumed light-speed invariance forces a photon traveler to exist instantly everywhere and forever. This is obviously unacceptable. The physical interpretation suggested by this essay is that time dilation is forced into the perception of travelers without being an actual physical occurrence. Undetected time quanta introduce an unrealistic perception of events.

b) It could be argued that processing time between events had always been implied by the expressions of special relativity, if viewed from the following angle:

γ = t/(t − ΔΤ) → ΔΤ = t(1 − 1/γ).
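
As a worked example, and reading t in this expression as the absolute time of the grid (as the 19-pulse example above suggests): for v = 0.6c, γ = 1.25, so ΔΤ = t(1 − 1/1.25) = 0.2t; one fifth of the absolute pulses would be spent as undetected processing.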

 

Potential

Every massive body is a cloud of information travelling through the stationary quantum space grid. This information cloud is not confined within the boundaries observed or detected as mass but extends to space elements away from it. Information density diminishes with distance and gravity weakens. In proximity to the massive host, increased information density demands more processing time for events to be resolved. Time is absolute, and so the additional processing manifests as a slower rate of change. The ratio of t and ΔΤ within absolute time is affected. Macroscopically this is observed as slower time flow in stronger gravitational fields.

 

 

In the presence of two massive hosts the force of gravity is the consequence of the asymmetry produced in each of the two gravitational fields by the superposition of information between the hosts. This asymmetry of time-flow rate is a region of diminished ΔΤ because of positive superposition of gravitational information within the region. The consequence is a region of negative vacuum pressure between the hosts which generates gravitational acceleration. This suggests that massive hosts do not move towards each other because of geodesics. They are not even attracted to one another. They are being moved by external energy interacting with them. Asymmetry relates to the gradient of the time-flow differential caused by the superposition of gravitational information. The external energy is the origin of ΔΤ. This external agent is responsible for accelerating the hosts in the direction of the asymmetry.

 

 

Treating the force of gravity as the consequence of superposition of information introduces the concept of active fields. According to this thinking, the force of gravity is not a direct consequence of mass. It is the consequence of interaction between the gravitational fields. This means that gravitational fields possess energy which can be translated into matter. This suggests that the total mass of the host/gravity system is the sum of the host mass plus the mass equivalent of gravity’s own matter density. An experimental method quantifying the proposed matter density embedded within gravity is discussed in the verification section of this theory.

 

It is noted that gravitational matter density is not mass. It is the consequence of quantum space responding to gravitational information in a manner that can be mathematically treated as matter density. Such an understanding requires gravitational fields to be finite and to extend out only to the point where space cannot detect further information differential. This is the region in space where gravitational information linked to any host is no longer diminishing because it has reached the threshold of detectable differential, the quantum gravitational edge. This threshold is a property of space which relates to the noise generated by the space grid itself - the space vacuum - due to its primordial quantum oscillation which defines the flow of absolute time.

 

[The rest of this discussion on “Potential” may be problematic]

Gravitational matter density variation with distance from the host is expected to be analogous to the variation of gravity inside a symmetric spherical entity whose density thins out from the center according to an inverse-square law. The math should be similar. According to textbooks of Galactic Dynamics (in this case astro.utoronto), the potential inside such an entity at distance r from the center, and for α = 2, is provided (by equation 3.54) as Φ(r) = 4πGρ₀r₀² ln r, where ρ₀ is the matter density at r = r₀, and r in this proposal is assumed to be the distance to the edge of gravity (which is suggested to be finite). This is actually a special case of density variation which produces a flat rotation curve. Interestingly, observed galactic rotation curves are approximately flat. Furthermore, data from 21cm radio observations using atomic hydrogen gas emissions (astro.umd, page 89) provide strong evidence of flat rotation curves out to radial distances where star density is very low. Traditionally understood gravitational strength and dark matter halos both seem to follow the inverse-square law. This is either an exotic coincidence or science. This proposal assumes it is evidence of science. This theory proposes that the additional galactic mass inferred by observed galactic rotation curves is the mass equivalent of gravity itself.
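
A minimal numerical sketch of the cited textbook result, assuming the standard relations v_c²(r) = r dΦ/dr and M(r) = 4πρ₀r₀²r for a ρ ∝ 1/r² profile (the density and radius values below are placeholders, not fitted data):

    # Flat rotation curve from an inverse-square density profile (illustrative values).
    import math

    G = 6.674e-11                  # m^3 kg^-1 s^-2
    rho0, r0 = 1.0e-21, 3.0e19     # placeholder density (kg/m^3) at radius r0 (m)

    def v_circ(r):
        # For rho = rho0*(r0/r)^2: Phi(r) = 4*pi*G*rho0*r0^2*ln(r) + const,
        # so v_c^2 = r*dPhi/dr = 4*pi*G*rho0*r0^2, independent of r (flat curve).
        return math.sqrt(4 * math.pi * G * rho0 * r0**2)

    def mass_enclosed(r):
        # M(r) = 4*pi*rho0*r0^2*r grows linearly with r; a finite edge keeps it finite.
        return 4 * math.pi * rho0 * r0**2 * r

    for r in (1e20, 5e20, 1e21):
        print(f"r = {r:.0e} m   v_c = {v_circ(r)/1e3:.1f} km/s   M(<r) = {mass_enclosed(r):.2e} kg")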

 

Problem 1: Infinities

It is understood that inverse-square-law density profiles are called singular isothermal sphere profiles. Such profiles produce infinities in central density and total mass (page 7, astro.princeton). The previously cited source defines the total mass (in expression 12) as M_r = 4πρ₀r₀²r. Therefore, the density ρ₀ diverges to infinity as the radius approaches zero: ρ₀ = M_r / (4πr₀²r). This theory has already proposed that gravity is finite in terms of the distance from the host mass (the quantum gravitational edge). This suggestion produces the necessary cutoff to the value of r, and therefore the total mass remains finite. The region of the theoretical infinity at the center of any gravitational well (and not just in galaxies) is occupied by visible mass, and so massive hosts eliminate the anticipated infinity at the core of gravity.
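
For instance, with the illustrative values of the sketch above and a gravitational edge placed at r = 10²¹ m, the total mass M = 4πρ₀r₀²r_edge ≈ 1.1×10⁴⁰ kg remains finite; only the formal r → 0 limit of the density diverges, and that region is occupied by the visible host.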

 

Problem 2: Preferred environment

Another problem is the fact that flat rotation curves are only observed in galactic halos and only at great distance from the galactic center. If gravity were made of matter density as a straightforward, mathematically precise entity, then every object would produce the exact same properties within its own gravitational field. For example, satellites would orbit Earth in a flat rotation curve, which is clearly not the case. Evidence of dark matter is observed only within extremely weak gravity, as if this condition defines some preferred environment. Perhaps this suggests that only when gravitational information is of low enough density does its interaction with the vacuum become observable, visibly distorting traditional physics.

 

 

 

Vacuum

According to what has already been discussed, vacuum energy is reflected through both potential and kinetic events. For it to be quantified, both events need to be considered.

The total available vacuum energy in the universe inferred by potential events can be estimated in terms of the ratio of empty space to the physical universe’s average density per unit of volume. This theory proposes that dark energy does not exist (suggesting time-flow deterioration instead) and that matter density is the sum of the traditionally understood visible matter/energy plus the invisible estimated dark matter. Thinking in terms of protons, for a radius of 0.85×10⁻¹⁵ m (precision is not necessary), the approximate volume is (4/3)πR³ ≈ 2.6×10⁻⁴⁵ m³. Considering an average cosmic density of roughly 1.7 protons per cubic meter (NASA), this ratio is close to 2.2×10⁴⁴, which is an order of magnitude of 10⁴⁴. Absolute light speed (c) is 3×10⁸ m/s. If the accepted value of minimum quantum length (λ = 1.6×10⁻³⁵ m) has physical meaning, the degrees of kinetic freedom per unit of time are c/λ, which is about 1.8×10⁴³. Again, this is an order of magnitude very close to 10⁴⁴. Average cosmic density is estimated through macroscopic observations of the largest possible scale. Kinetic freedom is defined through the smallest thinkable scale. The two considerations are unrelated. If the numbers deduced are indeed correct, and if there is no hidden loop in the thinking that produces them, it is interesting that the two orders of magnitude are almost identical. Current observations imply that matter density in potential is in close agreement with the available degrees of freedom in kinetics. The diagram below visualizes this concept:
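
The arithmetic above can be reproduced in a few lines; the proton radius, average density, and minimum-length figures are the same rough values quoted in the text:

    # Reproducing the two order-of-magnitude estimates quoted above.
    import math

    R_proton = 0.85e-15                          # m (rough value used in the text)
    V_proton = 4/3 * math.pi * R_proton**3       # ~2.6e-45 m^3
    n_avg = 1.7                                  # protons per cubic meter (rough cosmic average)
    ratio_potential = 1 / (n_avg * V_proton)     # empty space per unit of occupied space

    c = 3.0e8                                    # m/s
    l_min = 1.6e-35                              # m (minimum quantum length)
    ratio_kinetic = c / l_min                    # kinetic degrees of freedom per second

    print(f"potential ratio ~ {ratio_potential:.1e}")   # ~2.3e+44
    print(f"kinetic ratio   ~ {ratio_kinetic:.1e}")     # ~1.9e+43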

 

 

The relation between vacuum energy and physical existence should be expected to hold on all scales; it should be dimensionless. The presence of physical existence within any unit of volume would allow for less kinetic freedom within the same volume. It is the consequence of this suggested principle that is inferred by the traditional field equation of GR. Even though this proposed interaction may have already been derived, the following assumptions are not part of the traditional thinking:

 

- Space is absolute and therefore it can be treated as a constant.

- Time is also absolute but it has two components which are relative to each other.

- All cosmic processes are inefficient.

 

According to the above, and within our conscious frame of reference, gravity is the consequence of a time-flow rate differential within the vicinity of a massive host. The strength of gravitational fields relates to the gradient of this time-flow differential. This is the result of the superposition of all information within the region, combined with the motion of the host and its field with respect to the absolute reference frame of quantum space. This is because motion also affects time-flow rate, as discussed earlier.

 

Particles

Information traveling through time geometry (and not through the space elements grid) is perceived to cross unlimited distances instantly. However, information communication is never instant because processing time is always necessary. This processing is undetected by instrumentation, and therefore instant action is observed.

 

 

Momentum is a vector. It cannot be defined through one observation alone. More information is necessary. However, if two consecutive observations are used, information on location deteriorates because present time dilutes into two separate events. This problem can be resolved if additional information is hidden within the original unique observation. In this case present time (t) would be defined as (t + ΔΤ). Since processing events are undetected, observed reality is not diluted. The following diagram provides indicative visualization:

 

 

Every occurring event modifies the properties of the space elements affected by that event. To understand an event, it is necessary to compare the present state either to its immediate past or to its immediate future. The change of state observed defines the event in question. Comparing to the past is straightforward because the past has already occurred, and therefore its nature is deterministic. Comparing to the future is tricky because the future allows for choice and so it involves uncertainty. The consequence of this realization is that any single event can be either deterministic or stochastic at the same time, depending on the observational method used for detection. If the particle’s location information is extracted at the slit screen, then the patterns observed on the detection screen reflect the state of the particle immediately after having processed its available choices (after executing ΔΤ), and as such, the observation upon the detection screen is deterministic. If the location information is not extracted, then the patterns on the detection screen reflect the available future choices of the particle at the moment it reached the slit screen (before executing ΔΤ). Since there is choice, there is uncertainty. This uncertainty is observed upon the screen.

 

 

 

Law of efficiency (thermodynamics)

The natural flow of events is always the one which demands the least processing time (warm to cold). This is universal and can be described as the law of efficiency.

 

Space interaction mechanics

During the state of vacuum, individual space elements interacted only within their own vicinity. Simple blocks of information with locked properties (first generation particles) allowed for greatly enhanced efficiency by diminishing the total required processing time (and therefore the necessary energy) to resolve the universe’s events. Instead of many individual interactions between space elements, a considerably smaller number of events between locked information blocks had to be processed. This event of enhanced efficiency propagated throughout the cosmos and ignited the creation event which brought the entire visible universe into existence. It was a cold start producing very little energy per unit volume. The temperature of birth was very similar to today's observed cosmic microwave background radiation because space does not expand and so it does not cool significantly. The structure of this cosmic event relates to patterns of thermodynamic efficiency propagation. This structure is still visible today in the form of the large-scale baryonic distribution observed as galaxy filaments. This type of structure is typical and commonly observed as the density distribution of thermodynamic efficiency in large enough systems.

As the evolution of the cosmos progressed, the frequency of the universe deteriorated as processing energy was continuously supplied into the system. Retardation introduced growing stability. This allowed for more complex information blocks to develop (next generation particles), which introduced further efficiency. The pattern repeated itself until information within space elements eventually condensed into massive particles. Such information clouds were forced to break away from typical behavior because their reach extended beyond their immediate vicinity through their secondary clouds, their fields. As such, vast regions of space acquired pre-defined properties directly linked to their hosts, allowing for advanced efficiency in space interactions within those regions.

 

Forces

The direction of any fundamental force vector is the one which – if followed – would lead into the optimum path of diminished processing. Attraction occurs if the information properties of the entities involved are superimposed positively. Repulsion occurs in negative superposition. Magnitude does not relate to the properties of the confined volume of space occupied by the host, but to the properties of the total region affected (host plus field). The magnitude of forces relates to the gradient of saturation differential within the region affected. Forces are finite because fields extend out only to the point where space elements cannot detect further saturation differential, as discussed earlier with gravity.

Nuclear forces are of the same origin. Since nature can accommodate scales many orders of magnitude smaller than known particles, it should not be unrealistic to assume that sub-atomic particles possess (or indeed are themselves) individual fields and, depending on their gradient of information dissipation with distance from the host, they produce similar time-flow differentials which are manifested as nuclear forces. The properties of those forces depend on the field superposition of all sub-atomic hosts (entities) within the affected region.

 

Inertia

Fields, the secondary information clouds of complex particles, act like decelerating canopies. They force their hosts out of maximum oscillation and away from light speed. This is because, as already discussed, field properties extend out and introduce a secondary information cloud linked directly back to the host. This adds complexity to processing events within the host/field system and therefore the host's motion breaks out of cosmic frequency (out of light speed). Any further change of state is resisted since it would require additional processing. Demand for processing is against the law of efficiency and as such is manifested as resistance. This is understood as inertia. If gravity (or the electromagnetic field) were to disengage from its host, the host would lose its inertia, behave as massless, and travel at the speed of light.

 

Life

Life is either a direct consequence of physics or it has been injected into the system by an external agent. This theory assumes the former. On the common understanding that physics must be the same everywhere and that no cosmic region is more special than others (homogeneity), it is proposed that life must be embedded within the entire universe in terms of some primordial property. This property remains inactive, or perhaps exists in some kind of ground state which remains invisible until it manifests when specific conditions are met. Such a ground state would be the origin of life. The mechanism which allows life to materialize out of its primordial condition is birth.

 

Nothing in the universe is fundamentally macroscopic as far as science understands. Nothing, except life. Single living cells are small, but they are not particles. Yet the dominant property of life differentiating it from all other macroscopic events is its stochastic nature. Life is noisy because of choice, which means because of processing. All of the processing is hidden, making life look stochastic to an observer, but in the frame of internal processing all actions are predictable. An external agent with access to all internal processing would understand life as a purely deterministic system.

 

Pictured below is a composite of two different thermodynamic systems. One of the images is the structure of matter distribution in deep-space galactic superclusters. The other is the structure of the density distribution of intelligent life on the face of a planet, within a region of relatively flat topography. Those images are 3D and curved 2D respectively. If it can be assumed that their planar projection pictured below preserves an accurate reflection of the physics lying underneath, then, if those structures display similar characteristics, there should be some common principle behind them.

 

 


 

The observable structure of intelligent life was created by energy dissipated into work done by intelligent hosts. This work is still being produced to sustain and further enhance life’s entropic minimum, as reflected in the complexity of the construct (civilization). The process generates heat because of inefficiencies. Heat warms the immediate environment of life, the atmosphere. Intelligent life can be understood in terms of a straightforward thermodynamic engine. This mechanism is perfectly aligned with natural events.

 

Life events seem to be aligned with natural events at both small and large scales. They are noisy until they accumulate into large systems. Those systems then behave predictably and follow the laws of thermodynamics, exactly as natural events do. The missing scientific link between natural events and life events would be cosmic processing. Observing the existence of cosmic processing would perhaps begin to narrow the gap between life and physics. Reminding readers that this is an absolutely atheist theory (events are resolved locally without depending upon anything outside their vicinity), the following definition of origin can be suggested:

 

The origin of life is the uncertainty embedded within all microscopic events, apparently disrupting the deterministic nature of the universe because of hidden processing behind it.

 

Extracting the units from the progression, the universe is 14, the Sun is 5, the Earth is 4.5 and life is 4 (in billions of years). These are all of the same order of magnitude. It is peculiar how quickly life began in the very early phase of our planet. This could have been a chance event, in which case there would be no physics hiding underneath, but this theory argues that the opposite is true. The life event could not have materialized unless there was a primordial mechanism persistently exploiting the environment, waiting for the necessary window (conditions) to erupt. The physics responsible for all primordial progressions in nature, including life, is the process by which order is created seemingly spontaneously out of noise, but is in fact controlled by a deterministic mechanism which remains invisible. If time flow is visualized as a progression of well-defined discrete events, then creating order out of noise would be about bridging different layers of thermodynamic efficiency. This is achieved by allowing an adequately large number of perturbations in the interactions within the system, always under conditions of internal freedom whereby external agents do not interfere.

 

The diagram to the left is only indicative. It visualizes a sequence of perturbations (time steps) in a system that leads into the birth of an event of enhanced efficiency compared to its ground state. The event produced is itself a system; a subset of the ground state which created it. There is always some minimum threshold of local efficiency that needs to be reached if that event is to be sustained and further enhanced by locking its properties against deterioration. This is achieved through the typical function of thermodynamic engines where energy is used out of the immediate environment, usable work is being produced, while inefficiencies are always present. The red spike on the diagram could have been any of the following events:

In the case of the life event (3), the long-lasting cosmic equilibrium of solar systems shields this fragile process from cosmic influence. A liquid medium further protects the process from local fluctuations, keeping the conditions as flat as possible. This is necessary because internal freedom is essential for the considerably large set of local-level molecular perturbations to be executed before thermodynamic efficiency is spontaneously achieved (life). It is then a matter of thermodynamic efficiency optimization (evolution).

 

Definition: Birth is a probabilistic event created out of a deterministic natural mechanism which spontaneously produces order out of noise within any thermodynamic system given enough elements and perturbations in a state of freedom.

 

Notes:

a) Determinism: This mechanism is understood as deterministic in the sense that every single one of the innumerable perturbations is a perfectly defined state of the ground system within its own frame. The enhanced system produced cannot understand this progression as deterministic because the incremental low-level processing behind those perturbations remains invisible to it.

b) Freedom: Internal freedom allows for external energy to be fed into the system provided its flux remains constant, thus without producing inhomogeneity. The external energy source is necessary to sustain the progression, which otherwise deteriorates.

 

Existence

Existence is allowed only if it can be reversed. The idea that a positive number can exist if an equal but negative number also exists is incomplete. According to this theory, 1 + 1 = 2 only if the universe is a closed system, if all processes are perfectly efficient, and if life does not exist. The first assumption may be reasonable, but the other two are not. This makes the math imprecise, unless two new variables are introduced. Processing is negative because it is inefficient, and life is positive because it is super-efficient (in the frame of nature). Thus, it is life which facilitates existence by allowing the system to be reversed, because creation emerges from within. The progression mentioned above (Birth) is missing its final step. The necessary ground state of life, before it breaks out into the final layer of thermodynamic efficiency, requires that billions of civilizations exist simultaneously, of which only one will generate the thermodynamic spike and break out. There is some indirect observational evidence supporting this idea, but the issue clearly remains speculative:

- Abundance: The universe exists in an over-abundance of energy/star systems. There is no apparent reason for this.

- Repetition: Cosmic events produce a repetitive pattern of spherical heat sources (stars) and their orbiting companions (planets). There is nothing more, only variation of size and age. Ignoring the magnitude of it, the universe is one very dull, repetitive entity. This condition is conveniently flat and able to accommodate the discussed perturbation mechanics.

- Heat: There is always one handy energy source which is flat in terms of flux, and whose timeframe adequately accommodates civilizations' time spans: the local star.

- Life: At least one exotic low-entropy complex living system exists, and we know this with certainty. We are it.

 

 

 

VERIFICATION

Absolute reference frame

If it can be experimentally confirmed that the force of gravity fluctuates in step with Earth's motions through space, this would indicate the existence of an as-yet-undefined absolute reference frame. The fluctuations should be aligned with Earth’s cycles. All of the following should be confirmed as periodic gravitational fluctuations:

- Earth’s rotation about its axis

- Earth’s orbit around the Sun

- Axis rotations relating to both of the above

If measurements of the gravitational constant G can be accurate enough, and if they can indicate such periodic fluctuations outside the margin of error of the instrumentation, then this would count as an indication of validity for the Quantum Space Elements proposal.
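
As an indication of the kind of analysis such measurements would require, the following sketch runs a plain periodogram over a hypothetical time series of G measurements (the data here are invented, with a tiny annual modulation injected purely for illustration):

    # Sketch: search a hypothetical series of G measurements for periodic
    # fluctuations aligned with Earth's cycles (here, an injected annual term).
    import numpy as np

    days = np.arange(3 * 365)                    # three years of daily measurements
    G0 = 6.674e-11
    rng = np.random.default_rng(0)
    G_series = G0 * (1
                     + 1e-5 * np.sin(2 * np.pi * days / 365.25)   # injected annual signal
                     + 2e-6 * rng.standard_normal(days.size))     # measurement noise

    spectrum = np.abs(np.fft.rfft(G_series - G_series.mean()))
    freqs = np.fft.rfftfreq(days.size, d=1.0)    # cycles per day
    peak = freqs[1:][np.argmax(spectrum[1:])]    # skip the zero-frequency bin
    print(f"strongest periodicity ~ {1 / peak:.0f} days")   # ~365 for this synthetic series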

 

 

Time Geometry (quantized time)

The necessary transition time between kinetic events manifests at velocities close to the speed of light. This is because only at such speeds does t diminish and become comparable to ΔΤ. When those two components of time are of a similar order of magnitude, the apparent linearity of events (the continuous flow of events) should break down. Experimentally observing events materializing under such special conditions should produce a non-continuous flow. An experiment measuring the actual speed of accelerated particles (the actual distance travelled over time, and not the speed inferred through energy gained) would at some point observe further acceleration in quantized steps, as opposed to a continuous flow asymptotic to light speed. Such an observation would produce evidence of quantized time and of ΔΤ, the undetected component of time.

 

 

Active Fields (Inertial manipulation)

This experiment investigates the effect of field superposition on the inertial mass of a spherical host. According to the concepts proposed in this essay, inertia can be defined as the consequence of either gravity or an electromagnetic field, or both if they exist simultaneously in connection with the same host. Each field would introduce additional information into the space elements affected and therefore would increase or decrease the total processing required to resolve space interactions within the fields, depending on positive or negative superposition of information. This would manifest as an increasingly delayed or more immediate response to any change of state of the host, and therefore as a measurable change in the host’s inertia.

If the spherical mass hangs freely from a spring, the strain measured at rest relates to the passive gravitational force applied on the sphere and its gravitational field by Earth's gravity. If a brisk upward force is applied to the sphere through that same spring, the strain gauged before the sphere begins to move upwards would be proportional to the sphere's inertia. Repeating this experiment at different levels of charge, up to the point where the sphere is fully charged, should produce an observation of gradually increased or decreased inertial strain, depending on positive or negative field superposition. If this experiment is successfully executed and the observed results align with the above anticipations, then a deeper understanding of quantum space properties and interactions would allow for inertial manipulation of massive hosts by artificial fields. The energy required to eliminate the inertial mass of a neutral (not charged) host would be equal to the energy of the active gravitational field generated by the host. It would be reasonable to assume that the total active gravitational energy of the sphere is equal to the effective energy of the induced electric field (total energy fed into the sphere minus experimental inefficiencies) multiplied by the ratio of the inertial differential observed.
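
The bookkeeping proposed in the last sentence could be written out as follows; every symbol here is a hypothetical experimental reading, and reading "ratio of the inertial differential" as (strain change)/(neutral strain) is one possible interpretation, not something fixed by the text:

    # Hypothetical bookkeeping for the proposed inertia experiment.
    def active_gravitational_energy(E_field_in, losses, strain_neutral, strain_charged):
        # E_field_in: energy fed into charging the sphere (J)
        # losses: experimental inefficiencies (J)
        # strain_neutral / strain_charged: inertial strain readings before/after charging
        E_field_effective = E_field_in - losses
        inertial_ratio = (strain_neutral - strain_charged) / strain_neutral
        return E_field_effective * inertial_ratio

    # Example with made-up numbers: 10 J in, 2 J lost, 5% drop in inertial strain.
    print(active_gravitational_energy(10.0, 2.0, 1.00, 0.95))   # ~0.4 J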

 

Active Fields (Vacuum thrust)

This is a modified version of the above experiment. If the electric field is not perfectly symmetric while the sphere remains a unified entity, for example a sphere of variable shell thickness instead of a sphere fragmented into sections of various conductive materials, then the introduced asymmetry should manifest as a force acting upon the sphere, producing thrust. This force would originate from the vacuum. The symmetry of interaction between the vacuum and the physical part of the cosmos (the information embedded in space) would have been broken. The time components would have been asymmetrically distributed around the sphere. The direction of this force would point to the region of diminished ΔΤ. This force would persist as long as the field asymmetry were sustained. The same mechanism can also be described as anti-gravity if motion is not the objective.

 

 

Retardation

It is redshift that holds the key to verifying retardation. Time-flow rate is invisible and so it cannot be observed directly. It must be mathematically inferred. If distorted images of distant galaxies were mathematically reconstructed by use of time variables (instead of space or energy variables), then perhaps galaxies would have to rotate faster in the distant past. Every periodic event would have to be accelerated. Cosmic constants would become time dependent.

 

 

Cosmic processing

This is a modified double-slit experiment. Introducing asymmetry in the environment of the slits (for example a thin transparent film over one of the two slits, some temperature gradient, or any other impurity in the conditions between them) should result in an asymmetric spread (say 70/30 instead of the typical 50/50) if the path of photons is detected. If those photons are then collected and redirected by some optical channel back through the slits (delayed by the use of mirrors if necessary), but this time under perfectly symmetric conditions between them, it would be interesting to investigate whether there is any memory of that initial asymmetry. In traditional thinking there should not be any memory, and the spread should instantly return to 50/50. Any different observation, even one defining an extremely brief transition, would provide clear evidence of cosmic processing.
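
A small sketch of how the spread comparison could be checked statistically, assuming simple photon counts behind each slit (the counts below are invented for illustration; scipy's binomtest is just one convenient tool):

    # Test whether a measured left/right spread is consistent with 50/50.
    from scipy.stats import binomtest

    def spread_test(n_left, n_right):
        result = binomtest(n_left, n_left + n_right, p=0.5)
        return n_left / (n_left + n_right), result.pvalue

    # Invented counts: the asymmetric detection run, then the symmetric "memory" run.
    print(spread_test(7000, 3000))   # ~0.70, p far below 0.05: clearly asymmetric
    print(spread_test(5050, 4950))   # ~0.505, p well above 0.05: consistent with 50/50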

 

 


Licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

 
