
Gravitational Waves: Why detecting them would give us new eyes on the Universe around us.

Tantalizing new reports of gravitational waves being detected hit the web recently.

So if a detector does capture them, I'd imagine it would first detect distant sources radiating at high frequencies. Proximal sources are going to have very long wavelengths, and I don't know how those could be disambiguated without very long observation windows.
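To put a rough number on that: for a circular binary, the leading-order quadrupole radiation comes out at twice the orbital frequency, so the GW period and wavelength from a nearby planetary system are enormous. A back-of-envelope sketch (Mercury's 88-day orbital period is the only input; the numbers are just arithmetic, not a detector model):

```python
# Back-of-envelope: GW period and wavelength from a circular binary.
# A circular binary radiates at twice its orbital frequency.
c = 2.998e8  # speed of light, m/s

def gw_period_and_wavelength(orbital_period_s):
    t_gw = orbital_period_s / 2.0   # GW period in seconds
    return t_gw, c * t_gw           # wavelength = c * period

# Mercury's ~88-day orbit around the Sun
t_gw, wavelength = gw_period_and_wavelength(88 * 86400)
print(t_gw / 86400)           # ~44 days per GW cycle
print(wavelength / 1.496e11)  # wavelength in AU: thousands of AU
```

One GW cycle takes about 44 days, and the wavelength is thousands of AU, which is why a "very long observation window" is unavoidable for these proximal sources.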

Proximal sources of such waves are:

a) the Sun itself: very, very tiny micro-shedding as it gives up mass to energy. These waves are likely to be extraordinarily weak and almost certainly not capturable by current-generation technology.

b) the Sun–Mercury orbit: though Mercury is tiny compared to the Sun, both distort spacetime and sit in mutual wells, which should create a very tiny wake (a GW) with periodicity matched to Mercury's orbital rate around the Sun. Being a longer wavelength, it would require a long observation window, and the amplitude is also abysmally tiny. So again, unlikely to be captured with current tech.

c) the Sun–Venus orbit: though Venus is a bit bigger than Mercury, it is still small, and any emitted wake will be super tiny. However, the combination of the Venus–Sun wake and the Mercury–Sun wake may produce very distinctive interference fluctuations that clearly identify the signal as the local distortion signature of that planets–Sun system. I am still not sure a first-generation detector could detect these.
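The relative sizes here can be sketched with the standard quadrupole-formula strain for a circular binary. One caveat to flag loudly: this formula is a wave-zone approximation, and at Sun–Mercury frequencies the wavelength is thousands of AU, so Earth sits deep in the near zone and the Sun–Mercury number below is at best an order of magnitude. The neutron-star parameters are illustrative, not taken from this post:

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
M_SUN = 1.989e30  # kg

def quadrupole_strain(m1, m2, f_orb, dist):
    """Leading-order GW strain from a circular binary (quadrupole formula).
    Valid in the wave zone; f_gw = 2 * f_orb for a circular orbit."""
    m_chirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2  # chirp mass
    f_gw = 2.0 * f_orb
    return 4.0 / dist * (G * m_chirp) ** (5 / 3) * (math.pi * f_gw) ** (2 / 3) / c ** 4

# Sun-Mercury wake seen from Earth (1 AU away). Near-zone caveat above:
# treat this as order-of-magnitude only.
h_mercury = quadrupole_strain(M_SUN, 3.30e23, 1 / (88 * 86400), 1.496e11)

# For contrast: two 1.4 M_sun neutron stars orbiting at 50 Hz
# (f_gw = 100 Hz), 40 Mpc away -- squarely in a detector's band.
h_ns = quadrupole_strain(1.4 * M_SUN, 1.4 * M_SUN, 50.0, 40 * 3.086e22)

print(h_mercury)  # ~1e-22, but at a frequency no instrument can integrate
print(h_ns)       # ~1e-22 as well, but at 100 Hz
```

The striking thing is that the two strains are comparable; what kills the local sources isn't amplitude alone, it's that their sub-microhertz frequencies fall far below any current detector's band.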

So what will likely be first detected IMO?

We'll detect large-amplitude, high-frequency waves generated by "nearby" binary stars.

even better if they are neutron stars ...

even better still if they are rotating black holes or black hole / star systems...

We should also detect GW noise, a background from the vicinity of the galactic core toward Sagittarius A*. This may be irregular and hard to pin down, as the gravitational dynamics in the core are complex, but they should shed a roughly Gaussian background signal.

The cool thing about all of this is that, knowing the mass attributes of nearby systems of these various types, we should be able to precisely calculate how big the GWs they shed will be and what their periodicity will be. So we'll be able to check reality against the equations (or rather, reality would be the check OF the equations' predictions).

It would REALLY be interesting if the equations made predictions that are off by some consistent degree, indicating GR needs some kind of correction. I don't anticipate that; I think when we do find them, we are going to find exactly what the equations predict. Though all the other major predictions of GR have been tested to a high degree, GWs stand apart as one of the most important, should they be revealed to be real.
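One such check already exists, albeit indirectly: the orbital decay of the Hulse–Taylor binary pulsar (PSR B1913+16) matches the GR prediction for energy carried away by gravitational waves. A sketch of that calculation using Peters' formula and the published system parameters (the figures below are from the literature, not from this post):

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
M_SUN = 1.989e30  # kg

def peters_pdot(m1, m2, period_s, e):
    """Orbital-period decay dP/dt from GW emission (Peters 1964),
    including the eccentricity enhancement factor."""
    f_e = (1 + (73 / 24) * e**2 + (37 / 96) * e**4) / (1 - e**2) ** 3.5
    return (-192 * math.pi / 5
            * (2 * math.pi * G / period_s) ** (5 / 3)
            * m1 * m2 / (m1 + m2) ** (1 / 3)
            / c ** 5 * f_e)

# PSR B1913+16: m1 ~ 1.441 M_sun, m2 ~ 1.387 M_sun, P ~ 7.75 h, e ~ 0.617
pdot = peters_pdot(1.441 * M_SUN, 1.387 * M_SUN, 7.75 * 3600, 0.617)
print(pdot)  # ~ -2.4e-12 s/s, matching the measured orbital decay
```

The predicted decay of roughly -2.4e-12 seconds per second agrees with the measured value, which is exactly the kind of "reality checking the equations" described above, just done with orbital timing rather than a direct wave detection.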

The main reason is highlighted by my explanation above. We will be able to pick up new signals from faraway systems that can help unravel mysteries about those systems that our optical, radio, microwave, and infrared observations can't.

For example, having GW detection will allow us to refine a host of calculations regarding the attributes of remote systems, like their mass-shedding rates. We'll also have a new tool (in addition to the very useful transit, radial-velocity [Doppler], and astrometric-wobble methods) to help identify star systems with planets, by measuring the gravitational wave beat patterns that emanate from remote systems. Though this will present a problem of filtering all the waves coming from everywhere: disambiguating the universe's GW background from the foreground will be necessary.
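That disambiguation problem is tractable in principle: with a long enough observation window, Fourier power from a periodic signal concentrates into one frequency bin while broadband background spreads across all of them. A toy sketch of pulling a weak periodicity out of noise (all numbers invented for illustration; this is not a real detector pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)
fs, duration = 100.0, 100.0            # sample rate (Hz), window (s)
t = np.arange(0.0, duration, 1.0 / fs)

# A weak 3 Hz "signal" buried in unit-variance background noise
f_signal = 3.0
data = 0.2 * np.sin(2 * np.pi * f_signal * t) + rng.normal(0.0, 1.0, t.size)

# Long observation window -> fine frequency resolution (1/duration = 0.01 Hz)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
power = np.abs(np.fft.rfft(data)) ** 2

recovered = freqs[np.argmax(power[1:]) + 1]   # skip the DC bin
print(recovered)  # 3.0 -- the buried periodicity pops out of the noise
```

Doubling the observation window doubles the signal power in its bin relative to the per-bin noise, which is the same reason the long-wavelength proximal sources discussed earlier demand such long integration times.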

At least this is what my intuition tells me based on what I know about General Relativity. Any GR experts feel free to correct or elaborate on anything I wrote above that doesn't make sense. ;)



