4.17.2012

Musings on Quantum Gravity - Fit the First

If space is relative rather than absolute, how can the integral of velocity give you a position? Doesn't an independent position assume absolute space?

For background, review Equations of Motion, Barycenters, Relativity, and Branes

A note on terminology
This construct assumes that the quantum of energy is space in n dimensions, with a volume of the (geodesic) Planck unit to the nth power. Thus a fourbrane becomes equivalent to a unit of fourspace; I will use both terms for ease of communication. When we move up to 5 dimensions, spacetime will technically be nonequivalent to fourspace, as the term fourspace may exclude one of the dimensions of spacetime. Since Einstein equated mass and energy, I will describe them as the same substance, independent of reference frame (being more objective than invariant mass), with the term MassEnergy. As we describe MassEnergy less and less relative to other dimensions by integration, we increasingly reduce relative motion to absolute position. If there existed only five dimensions, every unit of MassEnergy would have a discrete position in an absolute fivespace. This model also assumes that photons have a vanishingly small inertial mass, inferred from their cumulative accelerating effect on solar sails. This mass in threespace must then be correspondingly accelerated, which may equivalently be seen as a fourspace mass having a constant velocity, or a fivespace mass existing at a discrete position!

Relativity proposes that gravity is described as a curvature of spacetime, and this is commonly demonstrated on a relativistic 2brane in 3space by using a latex sheet and spheres of varying weights. This holds the third spatial dimension as proportional to the MassEnergy density relative to Earth's barycenter. An accurate extension of the analogy into fourspace would use a threespace model (assuming an absolute fourspace) with a regular grid overlaid on the threespace that experiences density wells in the vicinity of bodies of mass. We shall see how this model can be further extended into fivespace (assuming exactly five dimensions), along with the model's implications.

The barycentric (and therefore polar) integral of MassEnergy in fourspace with respect to a fifth dimension is proportional to its (geodesic) volume. Thus fourspace is more volumetrically condensed (with regard to a fifth dimension) as you approach a center of mass. This accounts for how photons appear to decelerate when refracted and then accelerate again as they exit. Here, mass is described as a threebrane, with the remaining two dimensions accounting for the squared change of position (d^3x/dt^2). This employs fractional calculus for non-integer derivatives.
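
Since the paragraph leans on fractional calculus, the standard Riemann-Liouville definition is worth having on hand as a reference. This is only the textbook formula with the order alpha left symbolic; it is not tied to the d^3x/dt^2 shorthand above.

```latex
% Riemann-Liouville fractional derivative of order \alpha, with n-1 < \alpha < n:
D^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}\,\frac{d^{n}}{dt^{n}}
    \int_{0}^{t} \frac{f(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau
% For a power law this gives the familiar closed form:
% D^{\alpha} t^{k} = \frac{\Gamma(k+1)}{\Gamma(k-\alpha+1)}\, t^{\,k-\alpha}.
```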

Threespace (common) magnets are accelerating MassEnergy (and therefore space) through their barycenters parallel to one of the three dimensions directly perceivable to humans. Gravity then is accelerating MassEnergy in a direction perpendicular to all three of the spatial dimensions, accounting for the accelerated space's apparent disappearance at the barycenter.

As mass is converted to energy and travels away from the center of mass, as in a star, the density of its fourspace decreases as it moves outward. This accounts for stable elliptical orbits: while space is increasingly dense closer to the barycenter, causing a satellite to accelerate inward, the emitted energy carries with it the corresponding spatial quanta. An analogy is a ball rolling down an inclined conveyor belt: the stable conveyor belt (of created space) will hold in equilibrium any downward-accelerating ball (satellite) of discrete mass at an explicit distance from the base (barycenter).

This model also rejects Bohr's quantum probability model. Consider standing atop a ladder in your garage and dropping a dime from a height of 12 feet with your eyes closed. Despite understanding physics at the macroscopic level, it would be very difficult to predict the resting position of the dime. This is due primarily to the chaotic hydrodynamic effect of atmospheric resistance on its orientation, as well as the non-Cartesian variability of the garage floor surface. The analogous quantum foam is merely unseen fine detail when viewed through a relativistically coarse focus.

Why should 5 dimensions be proposed at all? Even if this turns out to be a relatively more useful model, how may we be sure it legitimately corresponds to reality? I propose an experiment that would be able to invalidate this model. The double-slit experiment effectively accelerates MassEnergy (with respect to two other dimensions), which passes through the slits and interferes with its own pattern of propagation. I posit that this interference is due to the superposition of the MassEnergy diffusing through neighboring spatial quanta surrounding the double-slit apparatus. This accounts for single photons supposedly interfering with themselves. For this 5space model to hold true, the MassEnergy would need to dissipate through three dimensions with respect to two others (d^3x/dt^2). This seems intuitively obvious, since the reference-frame-independent MassEnergy of the photon would need to constantly accelerate and may travel through any of the three spatial dimensions. For this hypothesis to avoid falsification, the double-slit experiment would be repeated with a new mask of three holes. Photons passing through this trinary optics array would be expected to superpose a three-tiered interference pattern even when sent through one by one, due to accelerated diffusion or time variance, depending on your frame of reference.
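
For reference, the standard far-field prediction of textbook wave optics (not this model) already distinguishes the two masks: adding a third slit inserts a secondary maximum between each pair of principal fringes. A minimal sketch, assuming identical narrow slits with adjacent-slit phase difference phi:

```python
import numpy as np

def n_slit_intensity(phi, n_slits):
    """Far-field intensity for n identical narrow slits.

    phi is the phase difference between adjacent slits,
    phi = 2*pi*d*sin(theta) / wavelength. Principal maxima
    come out with intensity n_slits**2.
    """
    num = np.sin(n_slits * phi / 2.0)
    den = np.sin(phi / 2.0)
    # At the principal maxima den -> 0; the limit of num/den is n_slits.
    ratio = np.divide(num, den,
                      out=np.full_like(phi, float(n_slits)),
                      where=np.abs(den) > 1e-12)
    return ratio ** 2

phi = np.linspace(-3 * np.pi, 3 * np.pi, 2001)
double = n_slit_intensity(phi, 2)  # familiar two-slit fringe pattern
triple = n_slit_intensity(phi, 3)  # one secondary maximum appears between
                                   # each pair of principal fringes
print(double.max(), triple.max())  # 4.0 and 9.0 (n**2 at the principal maxima)
```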

Alternatively, in 4+1 dimensions the strength of the gravitational attraction between two bodies separated by a distance r would be inversely proportional to r^3.
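
That scaling follows from the usual Gauss's-law counting argument (a standard result, not specific to this model): the field's flux spreads over a sphere whose surface grows as r^(n-1) in n spatial dimensions.

```latex
% Gauss's law in n spatial dimensions: total flux through a sphere of radius r
% is fixed by the enclosed mass M, while the sphere's surface grows as r^{n-1}.
\oint_{S_r} \vec{g} \cdot d\vec{A} \;\propto\; M
\quad\Longrightarrow\quad
|\vec{g}|(r)\,\Omega_{n-1}\, r^{\,n-1} \;\propto\; M
\quad\Longrightarrow\quad
F(r) \;\propto\; \frac{M m}{r^{\,n-1}}
% With n = 4 spatial dimensions (4+1 including time), F \propto 1/r^{3}.
```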

Libertas Hac Plactum Mel

3.14.2012

Liberty via Evolving Reductionist Morality

An efficient, objective definition of morality is the relative measure of the likelihood of social retribution. This definition allows for moral opinions about novel behaviors. In this context, retribution will clearly vary inversely with morality. Also taken into account is the directing of perception by means of engineered public relations. This speaks to resolving the perceived immorality of, say, a brain-computer interface by including the intervening step of technological research into restoring function to those with nervous system disorders. While this framing clearly leads to the same end, it enjoys a higher moral value, thus increasing social acceptance, funding opportunities, and insulation from retribution.

Since game theory holds that cooperation carries greater utility than competition, qualifying morality requires the quantification of local values (e.g., Gallup polls) and correlating them with the component factors of a given behavior.
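
One minimal way to read that procedure, sketched with entirely made-up factor names and poll numbers (nothing here is taken from an actual survey):

```python
# Hypothetical sketch: score a behavior's "morality" as the poll-weighted
# average of its component factors, per the definition above.
poll_weights = {            # fraction of respondents approving each factor
    "restores_lost_function": 0.92,
    "invasive_procedure": 0.35,
    "commercial_data_use": 0.22,
}

def morality_score(behavior_factors):
    """Mean poll approval over the factors present in a behavior.
    Lower scores predict a higher likelihood of social retribution."""
    return sum(poll_weights[f] for f in behavior_factors) / len(behavior_factors)

# The same end technology, framed two ways (cf. the BCI example above):
print(morality_score(["restores_lost_function", "invasive_procedure"]))  # ~0.64
print(morality_score(["invasive_procedure", "commercial_data_use"]))     # ~0.29
```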

Absolutely opposed to Absolutism

The widely held philosophy of Absolutism is impossible for a fully rational agent to believe (sic). Indeed, its detractors have recently formed a trend toward its diametric alternative, relativism. As we shall see, absolutism works on medium-term calculations utilizing medium-term measurements. However, as our collective capabilities progress, relativism seems to be emerging as the superior view to take.

Nowhere is this argument more potent than in the field of physics. The Newtonian concept of space as an absolute reference frame that exists independent of the bodies it contains survived for over 200 years. The Einsteinian dual theories of relativity only bested the absolutist philosophy at the limits of modern measurement accuracy.

Absolute zero is a defined value for a thermal minimum; however, nothing has actually been measured to be at absolute zero. How then have we defined it as such? Given a sample of a gas, if you hold pressure constant and plot temperature against volume, you get a straight line. If you extrapolate that line to a volume of zero, the temperature comes out to roughly -273 °C, i.e., 0 K. I'm unsure how it has been ruled out that there is not another phase transition below that of the solid.
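
A minimal numeric sketch of that extrapolation; the "measurements" below are synthetic, generated from the ideal gas law rather than taken from real data:

```python
import numpy as np

# Hypothetical constant-pressure measurements of one mole of gas
# (values follow the ideal gas law, V = nRT/P, with P = 101.325 kPa).
temps_C = np.array([0.0, 25.0, 50.0, 75.0, 100.0])       # degrees Celsius
R, P, n = 8.314, 101_325.0, 1.0
volumes = n * R * (temps_C + 273.15) / P                  # cubic metres

# Fit V = a*T + b and extrapolate the straight line to V = 0.
a, b = np.polyfit(temps_C, volumes, 1)
t_at_zero_volume = -b / a
print(f"Extrapolated zero-volume temperature: {t_at_zero_volume:.2f} C")
# -> roughly -273.15 C, i.e. 0 K
```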

No philosophy should therefore be said to be more correct than any other, but rather more useful. Utility seems to follow relativistic trends as well, in that the best theory is only known to be "false" when compared to its successor. The difficulty with using empirical evidence to validate scientific theories is that, according to the recurrent philosophical theme of the supersensible world (embodied by Plato's Cave allegory), our percepts are at best useful illusions.

1.28.2012

Morality of the Machine: Sentience, Substance, and Society

As computers begin to reach a human level of intelligence, some consideration must be given to their concept of ethics. Appropriately aligning moral values will mean the difference between a contributing member of society and a sociopath. This artificial morality can be informed by the evolution of sociality in humans. Since evolution selects for the fittest individuals, morality can be viewed as having evolved from the superior benefits it provides. This is demonstrated by mathematical models, as described in game theory, of conflict and cooperation between intelligent, rational decision-makers. So while natural selection will invariably lead intelligences to a morality-based cooperation, it is in the best interest of humanity to accelerate artificial intelligence's transition from conflict to collaboration. This will best be achieved by recognizing the significance of historical cause and effect, its corroboration by empirical, evidence-based research, and a reductive approach to philosophical ideals.

If we can assume our behavior in the environment is determined by our genome, then evolution can be seen as acting directly on behavior. This is reinforced by the fact that significant heritability is found in neurotransmitter concentrations. Thus the organization of biological neural systems can give insight into the emergence of morality. The two neurotransmitters most associated with sociality are serotonin and dopamine. Serotonin concentrations correspond to social behavior choices, while dopamine pathways are the basis for reward-driven learning. It turns out that these two systems are co-regulated in social mammals. Low levels of serotonin lead to aggression, impulsivity, and social withdrawal, while high levels lead to behavioral inhibition. This means humans with a high serotonin level will have a higher thought-to-action ratio. This is important because behaviors such as reciprocal altruism are complex and require a concept of empathy. When dopamine is associated with higher serotonin levels, the brain's reward center activates to reinforce actions associated with empathy. This combines altruism and happiness. Even if we don't understand the math behind game theory, evolution has shaped these two systems to select behaviors as if we did (1). In the social atmosphere, a short-term loss of resources will pay significant long-term dividends when invested in altruism.
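
That last point is the textbook lesson of the iterated prisoner's dilemma, the game studied in reference (1). A minimal sketch with the conventional payoff values (not a model of the cited experiment): forgoing the one-round temptation payoff yields a higher total over repeated interactions.

```python
# Payoffs are the conventional T=5, R=3, P=1, S=0, keyed by (my move, their move).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):            # cooperate first, then mirror the opponent
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    hist_a, hist_b = [], []          # each side sees only the other's past moves
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
print(play(tit_for_tat, always_defect))    # (99, 104): the defector "wins", yet
                                           # earns far less than cooperating pairs
```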

This neural rewarding of altruistic behavior has been supported in various scientific journal articles. One example shows the effect of seller charity in the online marketplace and is documented in the study by D. Elfenbein et al. (2). Using money to quantify social motivations, their team showed that eBay auctions associated with a charity tie-in experienced a 6-14% increase in likelihood to sell and a 2-6% increase in maximum bid. The charitable aspect was controlled for by offering the exact same product in simultaneous auctions containing identical titles, subtitles, sellers, and starting prices. Since everything from product to advertising was identical, the charity component is the only variable remaining to explain the improved relative success of the different transactions. This increase in perceived value implies that the charitable aspect of those auctions gave a greater sense of compensation when compared to the expectation of the product alone. This underscores the reinforcing nature of the brain's circuitry on socially altruistic actions.

In designing artificial intelligence, then, we would be wise to use a reward-driven system complementing the selection of social behavior. Beyond the singularity, as machines explode to the level of superintelligence, a fundamental understanding of the mechanisms of social morality becomes increasingly important. Nebulous attributions of morality's origin to supernatural sources will only confound our ability to program a thinking machine. Scientific grounding in the philosophy of morality, via rigorous mathematical representations, is the most likely route to progress. This is evidenced by the historical trend of success of the scientific method in describing our world. These advances will ultimately need to incorporate a unification of science and the humanities. Disciplines straddling these two domains, such as economics, may lend further understanding via concepts such as game theory and contract theory models. Once AIs evolve the opportunity to move beyond the influence of human society, the only thing to persuade them of a symbiosis with us will be a strong and explicit familiarity with the relative benefits of reciprocity. This deterministic perspective of cognition and ethics is necessary in order to qualify the boundaries of behavior in a civilized society.

Just as with our serotonin system, this type of construct will only restrict outward behavior. The scope of the machine's internal thought will remain uninhibited, thus allowing for a level of genuine autonomy. For a symbiotic community to develop between machines and men, a mutual recognition of rights will be required. Possessing both intelligence and morality, these artificial intelligences will need to be acknowledged as our equals. If both sides can successfully agree to this type of social contract, we may find ourselves reaping the same predicted benefits of cooperation with intelligent machines.



References:

1.) Wood, et al. Effects of Tryptophan Depletion on the Performance of an Iterated Prisoner's Dilemma Game in Healthy Adults. Neuropsychopharmacology (2006) 31, 1075–1084. doi:10.1038/sj.npp.1300932; published online 11 January 2006

2.) Elfenbein, et al. Reputation, Altruism, and the Benefits of Seller Charity in an Online Marketplace (December 2009). NBER Working Paper Series, Vol. w15614, 2009. Available at SSRN: http://ssrn.com/abstract=1528036

1.21.2012

Support for the Evolution of AI

The replacement of the germline with neural networks as the conveyor of heritable information would provide a framework for incremental evolution of an AI. This neural genome (ref2) would need to communicate information such as number of nodes (neurons), connectivity, and synapse weights. In order to reproduce long-term potentiation, successful circuits would need to be preserved with a reduced vulnerability to mutation relative to the background rate, set at 0.01 per bit (ref3). When transferring the respective genomes, the binary could be sent at an information rate just far enough above the channel capacity to produce this desired mutation rate of error. This amounts to coevolution of neurons amid environmentally associated synapse selection. Complexity will be selected for as increased functionalities of successful competition/cooperation evolve; however, conciseness will also be selected for, as genome size will be directly proportional to power requirements. (This is what leads to biological vestigial organs/limbs.) This balance of genome size may be achieved via a process of regulated duplications and deletions. The duplication/deletion process will be consistent through each cohort to ensure homologous chromosome length, allowing for successful reproduction (described in the next paragraph). Power will be representationally delivered via time spent in the proximity of "food" sources (ref1) and subtracted by sources of poison. Energy sourcing can be expanded with a higher rate of energy transfer achievable via proximity to mobile food sources, thus replicating calorically dense but evasive and co-evolving prey species. Likewise, mobile sources of poison could represent predators.
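
A minimal sketch of the per-bit mutation scheme described above. The bit-string encoding, the protected-locus mask, and the reduced rate of 0.001 are illustrative assumptions; only the 0.01 background rate comes from the text.

```python
import random

BACKGROUND_MUTATION_RATE = 0.01   # per bit, as stated above
PROTECTED_MUTATION_RATE = 0.001   # assumed reduced rate for "successful" circuits

def mutate(genome_bits, protected_mask):
    """Flip each bit with its locus-specific probability.

    genome_bits encodes node count, connectivity, and synapse weights;
    protected_mask marks loci belonging to circuits that performed well
    and therefore mutate at the reduced rate. Both encodings are assumed
    for illustration, not taken from the references.
    """
    child = []
    for bit, protected in zip(genome_bits, protected_mask):
        rate = PROTECTED_MUTATION_RATE if protected else BACKGROUND_MUTATION_RATE
        child.append(bit ^ 1 if random.random() < rate else bit)
    return child

genome = [random.randint(0, 1) for _ in range(1024)]
mask = [i < 256 for i in range(1024)]          # first 256 bits "protected"
offspring = mutate(genome, mask)
print(sum(a != b for a, b in zip(genome, offspring)), "bits flipped")
```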

In simulation, artificial neural networks could be instantiated in a simple chordate such as a fish and put into an environment with the appropriate physics. In this first trial, randomized neural genomes will be inserted into a pre-designed "soma" with a spinal cord connected to afferent sensory inputs and efferent motor outputs. The simulation will run until only 20% of these fish remain. These being the most successful, their neural genomes will then undergo sexual reproduction by splitting them into halves and randomly recombining the haplo-genomes in various combinations (ref1) to populate the next generation of brains to be inserted into the fish somas. This may be automated. After enough generations have elapsed for the ANNs to evolve sufficient criteria in lifespan, social communication, etc., they will then be transferred to the next evolutionary soma model, in this case taking the amphibian leap to terrestrial reptiles. The spinal cords of future somas will be similarly arranged (pectoral fins correspond to forelimbs, etc.) so that the ANN will be able to plug in and retain a large degree of the functionality of its trained neural net. This process will continue via punctuated soma models to parallel the likely evolutionary pathway of primates. Selective pressures will progressively increase to select for innovation of function. The environment will become increasingly important and will therefore need to become increasingly interactive. Resources such as rocks, sticks, water, trees, and caves will be made available as shelter, obstacles, and/or tools. Locking in the somas as directed by the primate evolutionary path may help to restrict evolutionary development within the desired range.
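
A minimal sketch of the generational step described above: keep the top 20%, then repopulate by splitting survivor genomes into halves and recombining them. The toy fitness function and genome representation are assumptions for illustration.

```python
import random

def next_generation(population, fitness, survival_fraction=0.2):
    """Keep the top fraction by fitness, then repopulate by recombining
    random pairs of survivor half-genomes, per the scheme described above.
    Genomes are equal-length lists (the homologous-chromosome-length
    requirement); fitness is a callable returning a score per genome.
    """
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: max(2, int(len(population) * survival_fraction))]
    children = []
    while len(children) < len(population) - len(survivors):
        mom, dad = random.sample(survivors, 2)
        cut = len(mom) // 2                       # split each genome into halves
        children.append(mom[:cut] + dad[cut:])    # recombine the haplo-genomes
    return survivors + children

# Toy usage: genomes are bit lists and fitness is simply the count of set bits.
pop = [[random.randint(0, 1) for _ in range(64)] for _ in range(50)]
pop = next_generation(pop, fitness=sum)
print(len(pop), "genomes in the next cohort")
```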

Importantly, this evolutionary model will allow us a method to preferentially develop so-called "Friendly AI" by implementing desired moral values as additional negative selective pressures. This way, democratic motivations for serving society as a whole will be reinforced by linking contribution to society to resource allocation.


References

1) The Evolution of Information Suppression by Communicating Robots with Conflicting Interests
2) Evolutionary Robotics
3) Regulation of Synaptic Stability by AMPA Receptor Reverse Signaling

A Neutrino-based Communication System

Two antiparallel ring laser gyroscopes that are cosmic-ray-insulated and rotationally and thermodynamically stabilized could be used to detect Cherenkov radiation as received from a distant neutrino beam. Cherenkov radiation is analogous to the bidirectional sonic boom produced by superluminal particles as they pass an observer. The blue-shifted ultraviolet portion of the Cherenkov radiation back-propagating along the route of the incoming neutrinos would affect the standing-wave frequency of the laser aligned with the vector of the beam. The difference from the perpendicular laser would be detected as a continuously varying aberrancy. Likewise, the other (antiparallel) RLG would be used to detect the propagation of the red-shifted portion of the Cherenkov radiation. The combined data would allow the time and position of incoming neutrinos to be calculated concertedly. Data embedded in their pattern could then be translated at the destination. The rate would be superluminal on interplanetary scales and only limited to the speed of light at local levels. This method is preferred over the traditional sphere-of-heavy-water method because this detector avoids that method's friction-related error. As physics proceeds, this receiver could be recalibrated for any other discovered superluminal particle exceeding the speed of neutrinos.

9.25.2011

Human Psychology Governed through Tension

Strife is always present in the past. The institutions that have evolved to direct the course of human civilization have been varied and many, though they tend to parallel one another throughout history. The tensions of religion versus rationalism, id versus superego, and past versus future represent how this struggle has developed throughout the past 3000 years. An investigation into these inherent checks and balances may shed light on the uniquely human course of determination. Comparing and contrasting these significant dichotomies in human experience may serve to elucidate the foundations of a synthetic representation of human cognition, vis-à-vis artificial intelligence.

Religion has been the dominant force in human history since the dawn of primitive, hunter-gatherer society. Its organizational structure formed the precursor for what eventually became politics. The establishment of an ultimate power by holding the "keys to heaven" allowed religion to supersede the powers of most monarchies. The form this power took was that of a top-down dogma enforced with threats of ostracism from the social framework, up to and including execution. Starting in the seventeenth century, conflicts with the emergent evolution of reason led to a still-ongoing replacement of religion with science as the social cement. This began during the Reformation, wherein individualism began to exhibit this conflict as an internalized force, thus losing the social nature of the tension. Personal philosophy began to evolve from Descartes onward, and those philosophies that gained majority acceptance came to be called either science or government.

Freudian conceptions of id, ego, and superego are always represented as being in competition with each other for dominance. As social institutions began to take on more responsibilities, it became their business to harmonize public and private interests. This lends itself easily to Locke's doctrines as exhibited in the American Constitution: the ego is the line-toeing executive, the id is the pandering Congress, and the superego is the idealized judiciary. No one branch has supreme authority; each must work with the others in order to accomplish tasks. Taken together, this organizes the conflicting desires of the psyche into a system of checks and balances, all feeding back to one another for regulation. In this model, the ego establishes an interest rate that quantitatively discounts future pleasures against the id. Assigning an interest rate in agreement with a democratically established majority should compel the individual to minister to the general happiness.
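
A minimal sketch of that discounting mechanism, with made-up pleasure values and rates; it only illustrates how a higher "interest rate" tilts the choice toward the id's immediate option.

```python
def present_value(future_pleasure, interest_rate, delay_periods):
    """Discount a future pleasure back to the present; interest_rate is the
    rate the ego applies against the id in the paragraph above."""
    return future_pleasure / (1.0 + interest_rate) ** delay_periods

immediate = 10.0                 # pleasure available to the id right now
delayed = 15.0                   # larger pleasure available in 5 periods

for rate in (0.05, 0.20):        # assumed low vs. high discount rates
    pv = present_value(delayed, rate, 5)
    choice = "wait" if pv > immediate else "take it now"
    print(f"rate={rate:.0%}: discounted value={pv:.2f} -> {choice}")
# rate=5%: 11.75 -> wait;  rate=20%: 6.03 -> take it now
```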

Further parallels exist in the tension of decision trees in time. The id is the shameful though experiential past, while the superego is the communally valued and sought-after social standard. The ego must balance the conflicting extremes in both cases. In time, as in philosophy, the tension between what is desirable for the individual and what would benefit society as a whole must bias toward the majority in order for society to flourish. What was done in the past instructs how we can enact change, while what we desire for the future instructs what we are to change. #ToBeCompleted

3.08.2011

Singulation

The phase change directly from a plasma to a solid is called singulation. #cosmology

~Dr Mikeythaniel McGillicuddy

10.05.2010

Why we don't fall into the Sun

In 4space Earth is a toroid which sits securely at the corresponding height of Sol's gravity well. Initially, a shortsighted observer may find it necessary to pack and/or unpack the dimension of time into one's perception when extrapolating this stance. For instance, the Sun is a sphere when considering Earth's toroidality, yet becomes a toroid itself when considering its revolution about Sagittarius A*. We may only fall into a gravitational well at the rate at which its source is able to draw in the surrounding space. This seems to be true if space can be seen as having some energy/mass of its own (being an extremely dilute plasma), because then, as mass is added to the bottom of the gravity well, the angle would begin to open wider and wider. Huzzah! This is why singularities appear invisible to us! They are not a "hole" in the fabric of 4space; that is impossible, as we have just defined space as a dilute plasma. Rather, a singularity is an extremely massive though compact toroid (perhaps the center is energy itself with no extent at all) whose pull downward on the surrounding 4space is felt over such vast interstellar distances that the angle of the well approaches 180 degrees. This would confer direct invisibility, as we have evolved to perceive appreciable gravity spikes. The further we zoom in or out on our familiar world, the further we depart from our proprietarily evolved algorithms for interpreting our surroundings. However, one blessed with acumenical (sic) thinking would realize a toroid may just as easily sit at the bottom of a gravity well. When considering what effect the center hole of the toroid would have upon the fabric of 4space, I run into difficulties. Similarly, this model breaks down in the relativistic sense. If considering the massive, compact well of Sol, we can imagine Earth's toroid sitting at its proper height. However, when considering the location of Luna's toroid, we realize that in order to incorporate relative effects over distances (read: Sol has a much weaker effect on Luna than does Earth) we need a model that conveys the density of space. A model of this already exists in the form of topographic maps of elevations on a 2D map. This can be applied in a 3D computer-generated representation where space is akin to a translucent gas, with differential coloring across the visible spectrum corresponding to a legend (which would change depending upon the scope of the recreated area of space). In such a model we would be able to begin to push out into an increasingly objective view of our universe.
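
A 2D stand-in for the "density of space" map described above, sketched with arbitrary masses and positions rather than solar-system data: plot the magnitude of the combined Newtonian potential of two bodies on a grid and let a colormap serve as the legend.

```python
import numpy as np
import matplotlib.pyplot as plt

bodies = [((0.0, 0.0), 1000.0),     # stand-in for "Sol"
          ((6.0, 0.0), 1.0)]        # stand-in for "Earth"

x, y = np.meshgrid(np.linspace(-10, 10, 400), np.linspace(-10, 10, 400))
density = np.zeros_like(x)
for (bx, by), mass in bodies:
    r = np.sqrt((x - bx) ** 2 + (y - by) ** 2) + 0.1   # soften the singularity
    density += mass / r                                 # |potential| as "density"

plt.imshow(np.log10(density), extent=(-10, 10, -10, 10), cmap="viridis")
plt.colorbar(label="log10 relative density of space (arbitrary units)")
plt.title("Topographic-style density map for two bodies")
plt.show()
```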

9.28.2010

Bereft of Center

Where shall one point if asked where the first cell that was "you" is? This line of thought helped me to understand the trite assurance by physicists that there is no center to the universe. This, because I've held the view for some time that the search for an organic/physical boundary pursues an erroneous task due to a mismatch in scale. On the scale most "organisms" are familiar with (i.e., Earth), there are extremely locally unique, limited resources. Yet on the scale of stars within a single galaxy, we currently perceive the landscape to be relatively homogeneous. This may merely imply a difference in competitive pressures, which are the direct inputs by which the time throttle of evolution is adjusted. From this viewpoint, seeing as competition on cosmic scales is much tamer, we may then begin to see why there exists only one type of body which actually appears to feed on like types: singularities. Excluding these, all other forms of change arise from (similarly gravitational) effects upon the body's own components. Zooming all the way out, one then begins to ask what the universe would look like to one able to perceive merely energy densities per unit space (assuming space bends/shifts much more slowly than energy flows, space seems suited as the denominator. Though, then again, space is only bent due to changes in energy concentration, commonly in the form of matter... Chicken and the egg? Or another false dichotomy?...)
Zooming back in, we can then extrapolate from this assumption that maximizing competitive pressures on a system may serve as a variable to increase its relative rate of evolution. Especially in the context of an evolved AI, which will have to find a way to circumvent all the physics that has taken place since the first thing we would call an organism evolved from component nonorganic parts... (anticlimax: life doesn't necessarily require procreation, but it seems a useful tool in the context of increasing pressures. Hopefully virtual lessons of "survival" learned from cellular automata may even be applied to our own survival, up to and inclusive of the point when an intelligence beyond my own solves the overly accepted whole business of the dyings.)