
Gravitational Waves: why detecting them would give us new eyes on the Universe around us.

Tantalizing new reports of gravitational waves being detected hit the web recently.

So if a detector does pick them up, I'd imagine it would first detect distal sources with high-frequency signals. Proximal sources are going to produce very long wavelengths, and I don't see how those could be disambiguated without very long observation windows.

Proximal sources of such waves are:

a) the Sun itself: very tiny "micro-shedding" as it gives up mass to energy. These waves are likely to be extremely weak and likely not capturable by current-generation technology.

b) the Sun - Mercury system: though Mercury is tiny compared to the Sun, both distort spacetime and sit in mutual wells, which should create a very tiny wake (a GW) with periodicity matched to Mercury's orbital period around the Sun. Being a longer wavelength, it would require a long observation window, and it is also abysmally tiny. So again, unlikely to be captured with current tech.
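As a rough sanity check on "abysmally tiny," the standard circular-orbit quadrupole formula from GR gives the total GW power a two-body system radiates. A minimal sketch, using standard published values for the Sun and Mercury (not figures from this post), suggests the whole system sheds only tens of watts:

```python
# Rough estimate of GW power radiated by the Sun-Mercury pair, using the
# circular-orbit quadrupole formula:
#   P = (32/5) * G^4/c^5 * (m1*m2)^2 * (m1+m2) / a^5
# Masses and semi-major axis are standard ephemeris values.
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s
m_sun = 1.989e30      # kg
m_mercury = 3.301e23  # kg
a = 5.79e10           # Mercury's semi-major axis, m

P = (32 / 5) * G**4 / c**5 * (m_sun * m_mercury) ** 2 * (m_sun + m_mercury) / a**5
print(f"GW power from Sun-Mercury: ~{P:.0f} W")  # on the order of tens of watts
```

Tens of watts, spread over an entire planetary orbit, versus the Sun's ~4e26 W of light: that is why a nearby planetary wake is so far below current sensitivity. (Note this is the far-field formula; at 1 AU we are actually deep in the near zone for such long wavelengths, so even this overstates what a local detector would see.)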

c) the Sun - Venus system: though Venus is a bit bigger than Mercury, it is still small, and any emitted wake will be extremely tiny. However, the combination of the Venus - Sun wake and the Mercury - Sun wake may produce very distinctive interference fluctuations that would clearly identify the signal as the local distortion signature of those planet - Sun systems. I am still not sure a first-generation detector is able to detect these.

So what will likely be detected first, in my opinion?

We'll detect large-amplitude, high-frequency waves generated by "nearby" binary stars.

even better if they are neutron stars ...

even better still if they are rotating black holes or black hole / star systems...
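To see why compact binaries are the natural first catch, here's a back-of-envelope sketch of the strain amplitude from a circular compact binary, using the usual far-field formula with the "chirp mass." The example numbers (a 1.4 + 1.4 solar-mass neutron-star pair radiating at 200 Hz, seen from 100 megaparsecs) are illustrative assumptions, not values from the post:

```python
import math

# Back-of-envelope GW strain for a circular compact binary:
#   h ~ (4/d) * (G*Mc)^(5/3) * (pi*f_gw)^(2/3) / c^4
# where Mc is the chirp mass: Mc = (m1*m2)^(3/5) / (m1+m2)^(1/5).
G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s
M_SUN = 1.989e30  # kg
MPC = 3.086e22    # one megaparsec in metres

def chirp_mass(m1, m2):
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def strain(m1, m2, f_gw, d):
    mc = chirp_mass(m1, m2)
    return 4.0 / d * (G * mc) ** (5 / 3) * (math.pi * f_gw) ** (2 / 3) / c**4

# Hypothetical neutron-star pair at 100 Mpc, radiating at 200 Hz.
h = strain(1.4 * M_SUN, 1.4 * M_SUN, 200.0, 100 * MPC)
print(f"strain ~ {h:.1e}")  # around 1e-22 or so
```

Even though such a system is millions of light-years farther away than Mercury, its enormous masses and kilohertz-scale frequencies put the strain many orders of magnitude above any planetary wake, right in a ground-based interferometer's band.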

We should also detect a GW noise or background from the vicinity of the galactic core, toward Sagittarius A*. This may be irregular and hard to pin down, as the gravitational dynamics in the core are complex, but they should shed a roughly Gaussian background signal.

The cool thing about all of this is that, knowing the mass attributes of nearby systems of these various types, we should be able to precisely calculate how big the GWs they shed will be and what their periodicity will be. So we'll be able to check reality against the equations (or rather, reality would be the check OF the equations' predictions).
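The periodicity side of that prediction is the easy part: for a circular binary, Kepler's third law fixes the orbital frequency from the masses and separation, and the dominant GW frequency is twice the orbital one. A minimal sketch (sanity-checked against the Earth-Sun pair, whose orbital period had better come out at one year):

```python
import math

# For a circular binary, Kepler's third law gives the orbital frequency,
# and the dominant quadrupole GW frequency is twice that:
#   f_orb = (1/2pi) * sqrt(G*(m1+m2) / a^3),   f_gw = 2 * f_orb
G = 6.674e-11  # m^3 kg^-1 s^-2

def gw_frequency(m1, m2, a):
    f_orb = math.sqrt(G * (m1 + m2) / a**3) / (2 * math.pi)
    return 2 * f_orb

# Sanity check with the Earth-Sun pair (standard values): the orbital
# period should be ~1 year, so f_gw should be ~2 cycles per year.
f = gw_frequency(1.989e30, 5.972e24, 1.496e11)
print(f"Earth-Sun f_gw ~ {f:.2e} Hz")
```

So given a measured GW frequency and an independent mass estimate, the separation follows, and vice versa; that is exactly the kind of cross-check against the equations described above.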

It would REALLY be interesting if the equations make predictions that are off by some consistent degree, indicating GR needs some kind of correction. I don't anticipate that; I think when we do find them, we are going to find exactly what the equations predict. Though all the other major predictions of GR have been tested to a high degree, GWs stand apart as one of the most important, should they be revealed to be real.

The main reason is highlighted by my explanation above: we will be able to pick up new signals from faraway systems that can help unravel mysteries about those systems that our optical, radio, microwave, and infrared observations can't.

For example, having GW detection will allow us to refine a host of calculations regarding the attributes of remote systems, like their mass-shedding rates. We'll also have a new tool (in addition to the very useful transit, transit spectral [Doppler], and transit wobble methods) to help identify star systems with planets, by measuring the gravitational wave beat patterns that emanate from remote systems. Though I think this will present a problem of filtering all the waves coming from everywhere; disambiguating the universe's GW background from the foreground will be necessary.
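The disambiguation problem can be illustrated with a toy example: two periodic "wakes" at different frequencies buried in a broadband background. Given a long enough observation window, a Fourier transform pulls both lines out of the noise. The frequencies and noise level below are made up for illustration; real GW searches use far more sophisticated techniques such as matched filtering:

```python
import numpy as np

# Toy illustration of foreground/background disambiguation: two sinusoidal
# sources hidden in Gaussian noise, recovered via a long-window FFT.
rng = np.random.default_rng(0)
fs = 10.0                        # sample rate, Hz
t = np.arange(0, 2000, 1 / fs)   # a long (2000 s) observation window
f1, f2 = 0.31, 0.47              # two hypothetical source frequencies, Hz
signal = np.sin(2 * np.pi * f1 * t) + 0.8 * np.sin(2 * np.pi * f2 * t)
noisy = signal + 2.0 * rng.standard_normal(t.size)  # noise swamps the signal

spectrum = np.abs(np.fft.rfft(noisy))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
top2 = sorted(freqs[np.argsort(spectrum)[-2:]])  # two strongest spectral lines
print(f"recovered frequencies: {top2[0]:.2f} Hz, {top2[1]:.2f} Hz")
```

The point of the long window is frequency resolution: the FFT bin width is the sample rate divided by the number of samples, so the longer you watch, the finer you can split nearby periodicities apart, which is exactly the trade-off described above for slow, proximal sources.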

At least, this is what my intuition tells me based on what I know about General Relativity. Any GR experts, feel free to correct or elaborate on anything I wrote above that doesn't make sense. ;)



