Distinctive benefits of 24bit recording?, for digital sequencing
HTS
post Jun 1 2011, 09:21
Post #1





Group: Members
Posts: 356
Joined: 13-October 07
Member No.: 47799



http://www.hydrogenaudio.org/forums/index....st=#entry583387

1. The above thread says we can convert 16-bit files into 24-bit files. Can I get an explanation of how this works? Magnifying a low-pixel-count image to a large size isn't the same thing as having a high-resolution image, and upsampling a 44.1kHz recording to 192kHz is the same by that token. Is bit depth in a different category?

2. What are "quasi-linear" and "nonlinear" processes? Is only synthetic stuff nonlinear? Or can natural processing like convolution reverb come off as nonlinear as well?

3. I read somewhere that microphones can only do 18 bits at most, which is why CDs released back in the 1990s used and advertised 20-bit recording. So basically they are saying recording at 24 bits is a waste of storage space. Is this true? Many of the mics used to record the expensive sample libraries are not built with the newest technology; in fact many of them appear to be vintage mics that cost tens of thousands of dollars. Can devices designed back in the 1940s take advantage of a 24-bit recorder?

Thanks.
AndyH-ha
post Jun 1 2011, 10:41
Post #2





Group: Members
Posts: 2192
Joined: 31-August 05
Member No.: 24222



Converting 16 bit to 24 bit is simply a format change. The resulting lower-order eight bits will be zeros. This makes NO change to the audio data. It can be helpful if the audio is to undergo extensive DSP, as in mixing and mastering. The quantization errors will be much smaller, reducing distortion and noise created by the transforms. Actually, most such processing is done in floating point, 32 or 64 bit, rather than 24 bit integer, but the principle is the same. As far as finished product goes, however, few releases utilize close to 16 bits, and the available evidence is that there are no recordings where anything more than 16 bits is audible.
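To illustrate the format change (a minimal Python sketch, nothing tool-specific): padding a 16-bit sample to 24 bits is just a left shift, and shifting back recovers the original exactly.

```python
# Converting 16-bit to 24-bit: shift left 8 bits, padding the low byte with zeros.
def to_24bit(sample_16):
    return sample_16 << 8  # low 8 bits are zero; no audio information changes

def back_to_16bit(sample_24):
    return sample_24 >> 8  # drop the padding

s16 = -12345                      # an arbitrary 16-bit sample value
s24 = to_24bit(s16)               # -12345 * 256 = -3160320
assert back_to_16bit(s24) == s16  # the round trip is exact
```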

"microphones can only do 18 bit at most" is nonsense. The bit depth is simply the number of amplitude levels from nothing up to maximum signal level. All recordings made with 24 bit converters have nothing but noise in the lower order bits. Where the signal sits in the 24 bit range is a matter of adjusting the amplification in the analogue chain.

The best ADCs can capture about 20 bits before internal noise pretty much drowns out the signal. Very good quality in the preceding analogue chain, as well as exceptional recording conditions, are necessary to actually approach 20 bits of measurable signal in the recording.
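For reference, the rule of thumb behind those bit counts (a rough sketch; real converters are also limited by dither and analogue noise):

```python
import math

# Each bit of integer resolution is worth 20*log10(2) ~ 6.02 dB of dynamic range.
def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB (CD audio)
print(round(dynamic_range_db(20), 1))  # 120.4 dB (about the best real ADCs manage)
print(round(dynamic_range_db(24), 1))  # 144.5 dB (theoretical 24-bit figure)
```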
HTS
post Jun 1 2011, 22:04
Post #3





Group: Members
Posts: 356
Joined: 13-October 07
Member No.: 47799



QUOTE (AndyH-ha @ Jun 1 2011, 05:41) *
Converting 16 bit to 24 bit is simply a format change. The resulting lower-order eight bits will be zeros. This makes NO change to the audio data. It can be helpful if the audio is to undergo extensive DSP, as in mixing and mastering. The quantization errors will be much smaller, reducing distortion and noise created by the transforms. Actually, most such processing is done in floating point, 32 or 64 bit, rather than 24 bit integer, but the principle is the same. As far as finished product goes, however, few releases utilize close to 16 bits, and the available evidence is that there are no recordings where anything more than 16 bits is audible.

I meant for use in digital sequencing, not just album mastering. The sequencers/players are going to mix and add effects (reverb etc.) to the instrument samples. A lot of sample libraries that I use are recorded at 44.1/16, while some are recorded at resolutions as high as 192/24. When we use 32-bit processing, does it negate the advantage of 24-bit inputs?
saratoga
post Jun 1 2011, 23:03
Post #4





Group: Members
Posts: 4718
Joined: 2-September 02
Member No.: 3264



QUOTE (HTS @ Jun 1 2011, 17:04) *
QUOTE (AndyH-ha @ Jun 1 2011, 05:41) *
Converting 16 bit to 24 bit is simply a format change. The resulting lower-order eight bits will be zeros. This makes NO change to the audio data. It can be helpful if the audio is to undergo extensive DSP, as in mixing and mastering. The quantization errors will be much smaller, reducing distortion and noise created by the transforms. Actually, most such processing is done in floating point, 32 or 64 bit, rather than 24 bit integer, but the principle is the same. As far as finished product goes, however, few releases utilize close to 16 bits, and the available evidence is that there are no recordings where anything more than 16 bits is audible.

I meant for use in digital sequencing, not just album mastering. The sequencers/players are going to mix and add effects (reverb etc.) to the instrument samples. A lot of sample libraries that I use are recorded at 44.1/16, while some are recorded at resolutions as high as 192/24. When we use 32-bit processing, does it negate the advantage of 24-bit inputs?


Assuming the material is very well recorded, having 24 bit inputs gives you higher dynamic range, at least in theory. In practice I doubt most of your 24 bit material actually has > 100dB dynamic range. So I think the short answer is something like "there is not much of an advantage to be negated".
Axon
post Jun 1 2011, 23:35
Post #5





Group: Members (Donating)
Posts: 1984
Joined: 4-January 04
From: Austin, TX
Member No.: 10933



Linearity is a very specific mathematical concept in signal processing/electrical engineering. But in this specific context, I think the most pertinent points are: 1) a "nonlinear" operation on an audio signal synthesizes new frequency components that did not previously exist; and 2) for the vast majority of nonlinear operations you will encounter -- static distortion, clipping, crossover distortion, etc. -- the new frequency components occur at sums (and differences) of existing frequency components. These new frequencies can fall above Fs/2, and if they do, they will be aliased.

So upsampling is commonly performed before these "nonlinear" operations, to ensure that any distortion products are not aliased.

Generally, reverb, eq, mixing and gain changes are linear, while compression/limiting, modulation, and "static distortion" are nonlinear.
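A toy numerical check of that distinction (hypothetical signals, pure Python, not any particular plugin): a gain change adds no new frequency content, while squaring a sine creates energy at twice its frequency.

```python
import math, cmath

N = 64
x = [math.sin(2 * math.pi * 4 * n / N) for n in range(N)]  # a tone in DFT bin 4

def bin_mag(sig, k):
    """Magnitude of DFT bin k, computed directly from the definition."""
    return abs(sum(s * cmath.exp(-2j * math.pi * k * n / N)
                   for n, s in enumerate(sig)))

gained  = [0.5 * s for s in x]   # linear operation: constant gain
squared = [s * s for s in x]     # nonlinear operation: static distortion

assert bin_mag(gained, 8) < 1e-9   # gain created nothing at bin 8
assert bin_mag(squared, 8) > 1.0   # squaring created a component at 2*4 = bin 8
```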

This post has been edited by Axon: Jun 1 2011, 23:36
DVDdoug
post Jun 2 2011, 00:00
Post #6





Group: Members
Posts: 2448
Joined: 24-August 07
From: Silicon Valley
Member No.: 46454



QUOTE
A lot of sample libraries that I use are recorded at 44.1/16, while some are recorded at resolutions as high as 192/24bits. when we use 32bit processing, does it negate the advantage of 24bit inputs?
No. If anything, 32-bit floating point is higher resolution still, so it doesn't hurt.

There are "mathematical" reasons for using floating point... with DSP you can be working with very large and very small numbers, even though the "answer" (sample values) remains within a 16- or 24-bit integer range. (And sometimes the "answer" goes out of the integer range too... floating point gives you a chance to bring the levels back to normal before saving.)

If you're working with integers and you cut the volume of a 24-bit signal in half, you've "only" got 23 bits worth of information left. If you then re-boost the signal level, you still have just 23 bits of resolution. With 32-bit floating point you've essentially got unlimited dynamic range and you don't have this issue. (You can mix & adjust levels without losing resolution.)

Or, you could reduce the level of a 16-bit sample, and if you save it in 24 or 32 bits you won't lose resolution. (But unless you are re-boosting the signal later it doesn't matter! i.e. you can reduce the signal level to the point where you have only 4 bits left, and the signal & noise will be so low that you won't hear the lack of resolution. It's only if you re-boost the signal that you might hear the loss of resolution/quality.)

You can mix a 24-bit & 16-bit file together, and the 24-bit part will retain its resolution. (Except after mixing, which is done by addition, you normally end up reducing the level, which means you lose the low-level bits anyway.)


If you add reverb to a 16-bit signal, you can end up with more than 16 bits of data (i.e. as the reverb fades out). Even a simple fade-out or volume change can "take advantage" of additional bits that didn't exist in the original. (But you'll never hear it. ;) )
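A tiny sketch of the integer-halving point (plain Python integers standing in for PCM samples):

```python
# Halving an integer sample discards its lowest bit; re-boosting can't recover it.
samples = [32767, 101, 7, -12345]

halved_int = [s // 2 for s in samples]   # integer halving truncates
reboosted  = [s * 2 for s in halved_int]
print(reboosted)                         # [32766, 100, 6, -12346] -- LSBs lost

halved_f = [s / 2 for s in samples]      # floating point keeps the half exactly
restored = [int(s * 2) for s in halved_f]
print(restored)                          # [32767, 101, 7, -12345] -- no loss
```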

This post has been edited by DVDdoug: Jun 2 2011, 00:07
HTS
post Jun 2 2011, 07:07
Post #7





Group: Members
Posts: 356
Joined: 13-October 07
Member No.: 47799



QUOTE (saratoga @ Jun 1 2011, 18:03) *
Assuming the material is very well recorded, having 24 bit inputs gives you higher dynamic range, at least in theory. In practice I doubt most of your 24 bit material actually has > 100dB dynamic range. So I think the short answer is something like "there is not much of an advantage to be negated".

For which types of music will the 24bit start showing its advantage? Beethoven loud?

QUOTE
Generally, reverb, eq, mixing and gain changes are linear, while compression/limiting, modulation, and "static distortion" are nonlinear.

Samplers like Halion have "nonlinear" reverbs; I guess those don't fall into the general category? Are all convolution-based reverbs linear because they are recorded naturally?


QUOTE
Or, you could reduce the level of a 16-bit sample, and if you save it in 24 or 32-bits you won't loose resolution. (But, unless you are re-boosting the signal later it doesn't matter! i.e. You can reduce the signal level to the point where you have only 4 bits left, and the signal & noise will be so low that you won't hear the lack of resolution. It's only if you re-boost the signal that you might hear the loss of resolution/quality.)

What is the difference between stuff recorded at 16 bits then padded to 24 or 32 bits, and stuff that was captured with 24-bit ADCs?
Northpack
post Jun 2 2011, 10:35
Post #8





Group: Members
Posts: 455
Joined: 16-December 01
Member No.: 664



QUOTE (HTS @ Jun 2 2011, 06:07) *
For which types of music will the 24bit start showing its advantage? Beethoven loud?

Actually you would have to sit right in the orchestra to possibly experience a dynamic range higher than 16 bits = 96dB (that's why the musicians wear earplugs ;) ). The dynamic range experienced from the audience usually won't exceed ~80dB.

For cinema, Dolby defined a maximum peak on the LFE channel of 115dB. This would actually require 20 bits to avoid a noticeable noise floor*. This is for movie sound effects, however! Music, even very dynamic classical music like Beethoven, won't have that much dynamic range.

*which is quite theoretical, since the ambient noise of an auditorium will exceed 20dB by far.
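The 20-bit figure follows from the ~6 dB-per-bit rule (a back-of-envelope sketch, ignoring dither):

```python
import math

# Bits needed to span a given dB range, at 20*log10(2) ~ 6.02 dB per bit.
def bits_needed(db_range):
    return math.ceil(db_range / (20 * math.log10(2)))

print(bits_needed(115))       # 20 bits for the 115 dB LFE peak
print(bits_needed(115 - 20))  # 16 bits once a ~20 dB SPL room noise floor is granted
```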

This post has been edited by Northpack: Jun 2 2011, 10:39
Notat
post Jun 2 2011, 14:17
Post #9





Group: Members
Posts: 581
Joined: 17-August 09
Member No.: 72373



If I understand the question correctly, you're asking if there's a benefit in using a 24- or 32-bit format for production. In almost all cases, there absolutely is. People who tell you that 16 bits is enough may be technically correct, but be aware that making 16 bits work requires analysis and careful optimization of your gain structure. Just use the higher resolution and put the energy you save not having to think so hard about all this stuff into the music.

When it comes time to produce your final master, create both 16 and 24-bit versions. There will possibly be additional information in the bottom 8 bits. No one will be able to hear it but some people will be willing to pay for it :)
saratoga
post Jun 2 2011, 14:40
Post #10





Group: Members
Posts: 4718
Joined: 2-September 02
Member No.: 3264



QUOTE (Notat @ Jun 2 2011, 09:17) *
If I understand the question correctly, you're asking if there's a benefit in using a 24- or 32-bit format for production. In almost all cases, there absolutely is. People who tell you that 16 bits is enough may be technically correct, but be aware that making 16 bits work requires analysis and careful optimization of your gain structure.


That would be for recording. You have to be careful to set the gain right when recording, particularly for 16 bit. If you're just mixing, then the gain is already set by whoever recorded it. If they took the time to make 16 bit work, then it's no trouble for you.

pdq
post Jun 2 2011, 14:48
Post #11





Group: Members
Posts: 3311
Joined: 1-September 05
From: SE Pennsylvania
Member No.: 24233



QUOTE (HTS @ Jun 2 2011, 02:07) *
What is the difference between stuff recorded at 16bits then padded into 24 or 32 bits, and stuff that was registered with 24bit ADCs?

The only difference is whether the additional bits contain zero, or they contain noise (or at least mostly noise).
Ethan Winer
post Jun 2 2011, 17:51
Post #12





Group: Members
Posts: 248
Joined: 12-May 09
From: New Milford, CT
Member No.: 69730



QUOTE (AndyH-ha @ Jun 1 2011, 05:41) *
Converting 16 bit to 24 bit is simply a format change. The resulting lower order eight bits will be zeros. This makes NO change to the audio data. It can be helpful if the audio is to undergo extensive DSP, as in mixing and mastering. The quantization errors will be much smaller, reducing distortion and noise created by the transforms.


To be clear, all modern DAW software processes audio at 32 bits floating point. I think Pro Tools uses 40 or 48 bit fixed point, but it's the same idea. So there's no benefit from converting 16-bit files to more bits because the software already does that as it reads the data from disk, before applying processing.

QUOTE (Northpack @ Jun 2 2011, 05:35) *
The dynamic range experienced from the audience usually won't exceed ~80dB.


I'd say 50 to 60 dB is more like it. Maybe a really great hall with exceedingly quiet air conditioning can beat those figures. However, 50 dB s/n in a concert hall is not the same as 50 dB s/n in a cassette deck. Tape noise is more broadband with more HF content. The noise in a hall is more rumble, and is less audible due to Fletcher-Munson.

--Ethan

This post has been edited by Ethan Winer: Jun 2 2011, 17:51


--------------------
I believe in Truth, Justice, and the Scientific Method
HTS
post Jun 2 2011, 18:25
Post #13





Group: Members
Posts: 356
Joined: 13-October 07
Member No.: 47799



QUOTE (Northpack @ Jun 2 2011, 05:35) *
For cinema, Dolby defined a maximum peak on the LFE channel of 115dB. This would actually require 20 bits to avoid a noticeable noise floor*. This is for movie sound effects, however! Music, even very dynamic classical music like Beethoven, won't have that much dynamic range.

*which is quite theoretical, since the ambient noise of an auditorium will exceed 20dB by far.

Is the range the difference between the peak and the ditch? Like for 115db you would need sounds 19db or under to hear a difference?
DVDdoug
post Jun 2 2011, 21:20
Post #14





Group: Members
Posts: 2448
Joined: 24-August 07
From: Silicon Valley
Member No.: 46454



QUOTE
Is the range the difference between the peak and the ditch?
Right... Dynamic range is the difference between the quietest possible & loudest possible sound. (The quietest sound is usually noise.) Dynamic range usually refers to the equipment limits or file-format limits, not the actual sound/music. Or, you could refer to the dynamic range of an orchestra, comparing the quietest instrument (maybe the triangle or a single plucked string) to a full orchestral crescendo.

Musicians use the term dynamic contrast to refer to the musical content. Most classical music has lots of dynamic contrast, most popular music is "constantly loud" and has very little dynamic contrast. Or we just say, "The music is very dynamic", or "The music has lots of dynamics."

QUOTE
Like for 115db you would need sounds 19db or under to hear a difference?
He's saying that the background noise level in a quiet auditorium is at least 20dB SPL (to give you a rough idea of the dynamic range you might hear in a concert hall). This is over-simplified... but the idea is, if the loudest sound is 115dB SPL and the noise level is 20dB SPL, you need 95dB of dynamic range to record the sound/performance. (The sounds below 20dB SPL get "lost in the noise", and any "extra bits" are just recording noise.)



----------------------------------------------------------
Just to get a perspective on the audibility of this stuff, here are a couple of experiments you can try:

- Take one of your files and reduce the volume by 20dB, 40dB, 60dB, 80dB, 90dB, and listen to the results (without boosting the gain/volume). Now, think about those teeny-tiny details in the 17th bit that are around -100dB!

- Take one of your 24-bit files and make a 16-bit version of it. Subtract these two files and listen to the difference. Again, listen to the true difference; don't boost the volume. (If your audio editor doesn't have built-in subtraction, invert the polarity of one file and mix... And just to make sure the process is working, try subtracting an exact copy of the file first. If it's working you'll get silence.)
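For anyone without an editor handy, that null test can be sketched in a few lines of Python (synthetic noise standing in for a real 24-bit file):

```python
import math, random

random.seed(0)
original = [random.uniform(-1.0, 1.0) for _ in range(10000)]  # stand-in signal

def quantize(x, bits):
    """Round a [-1, 1) sample to the nearest step of a 'bits'-deep integer grid."""
    scale = 2 ** (bits - 1)
    return round(x * scale) / scale

# Subtract the 16-bit version from the original and measure what's left.
residual = [s - quantize(s, 16) for s in original]
rms = math.sqrt(sum(r * r for r in residual) / len(residual))
level_db = 20 * math.log10(rms)
print(round(level_db, 1))  # about -101 dBFS: the 16-bit quantization error floor
```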

This post has been edited by DVDdoug: Jun 2 2011, 21:21
AndyH-ha
post Jun 3 2011, 05:06
Post #15





Group: Members
Posts: 2192
Joined: 31-August 05
Member No.: 24222



QUOTE
To be clear, all modern DAW software processes audio at 32 bits floating point. I think Pro Tools uses 40 or 48 bit fixed point, but it's the same idea. So there's no benefit from converting 16-bit files to more bits because the software already does that as it reads the data from disk, before applying processing.
It is not true that all modern editors automatically convert to floating point upon opening a 16 bit file. The intermediate results of individual calculations may well be in floating point, but, when working on a 16 bit file, the data goes back to 16 bit at the end of each calculation. This means you either add dither each time or accept the resulting distortion. Whether the extra dither or the quantization distortion ever makes an audible difference is a separate issue.
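For reference, the dither step mentioned above looks something like this (a generic TPDF-dither sketch, not how any specific editor implements it):

```python
import random

def dither_to_16bit(sample_float):
    """Requantize a [-1.0, 1.0) sample to a 16-bit integer with TPDF dither."""
    scaled = sample_float * 32768
    # Triangular (TPDF) dither: sum of two uniform noises, +/- 1 LSB peak-to-peak.
    tpdf = random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    return int(round(scaled + tpdf))

# The dithered result lands within one step of the undithered value, but the
# error is now uncorrelated noise rather than signal-correlated distortion.
print(dither_to_16bit(0.25))  # 8191, 8192 or 8193
```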
HTS
post Jun 3 2011, 08:27
Post #16





Group: Members
Posts: 356
Joined: 13-October 07
Member No.: 47799



QUOTE (Axon @ Jun 1 2011, 18:35) *
Generally, reverb, eq, mixing and gain changes are linear, while compression/limiting, modulation, and "static distortion" are nonlinear.

What are the exceptions? The Halion3 sampler does have this "nonlinear reverb" section, but other than that I can assume that the rest all follow the rule?
Arnold B. Kruege...
post Jun 3 2011, 13:52
Post #17





Group: Members
Posts: 3537
Joined: 29-October 08
From: USA, 48236
Member No.: 61311



QUOTE (AndyH-ha @ Jun 1 2011, 05:41) *
The best ADCs can capture about 20 bits before internal noise pretty much drowns out the signal. Very good quality in the preceding analogue chain, as well as exceptional recording conditions, are necessary to actually approach 20 bits of measurable signal in the recording.


A pedantic point but...

If we are talking noise artifacts, current products seem to be claiming more like 21 bits.

In principle, the noise floor of converters can be reduced as desired by connecting them up in parallel, with an approximate 3 dB improvement for every doubling of the number of converters. This may sound silly, but in fact there is a commercial product that is composed of 8 converters in one package, and is specified for use as a single-channel device.
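The sqrt(N) averaging effect behind that 3 dB-per-doubling claim is easy to simulate (a toy model with Gaussian noise, not data for any real converter):

```python
import math, random

random.seed(2)

def noise_rms(n_converters, trials=20000):
    """RMS of the average of n independent unit-variance noise sources."""
    total = 0.0
    for _ in range(trials):
        avg = sum(random.gauss(0, 1) for _ in range(n_converters)) / n_converters
        total += avg * avg
    return math.sqrt(total / trials)

one, eight = noise_rms(1), noise_rms(8)
gain_db = 20 * math.log10(one / eight)
print(round(gain_db, 1))  # close to 9 dB: three doublings at ~3 dB each
```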

There's just no practical way to make a real-world live or studio recording with even 16 bits of dynamic range. In my investigations I have found some recordings that actually come close to 15 bits. Your typical live or studio recording, carefully done but not resorting to impractically extreme measures, will be in the 65-75 dB range. IOW 12-14 bits.

Then you have the problem of playing the recording back in a listening room with a typical 45 dB SPL noise floor without going where OSHA says there will be ear damage! ;-)
DonP
post Jun 3 2011, 16:52
Post #18





Group: Members (Donating)
Posts: 1469
Joined: 11-February 03
From: Vermont
Member No.: 4955



QUOTE (AndyH-ha @ Jun 2 2011, 23:06) *
QUOTE
To be clear, all modern DAW software processes audio at 32 bits floating point. I think Pro Tools uses 40 or 48 bit fixed point, but it's the same idea.
It is not true that all modern editors automatically convert to floating point upon opening a 16 bit file. The intermediate results of individual calculations may well be in floating point, but, when working on a 16 bit file, the data goes back to 16 bit at the end of each calculation. This means you either add dither each time or accept the resulting distortion.


Audacity by default keeps and saves the audio in floating point. It only goes back to 16 fixed when you export to flac, wav, etc.
Soundforge, from what I gather, can do the same. I don't know if that's the default.

That covers 2 popular choices (home and pro.)

Axon
post Jun 3 2011, 20:58
Post #19





Group: Members (Donating)
Posts: 1984
Joined: 4-January 04
From: Austin, TX
Member No.: 10933



QUOTE (HTS @ Jun 3 2011, 02:27) *
QUOTE (Axon @ Jun 1 2011, 18:35) *
Generally, reverb, eq, mixing and gain changes are linear, while compression/limiting, modulation, and "static distortion" are nonlinear.

What are the exceptions? The Halion3 sampler does have this "nonlinear reverb" section, but other than that I can assume that the rest all follow the rule?

The "usual" or "naive" way you'd implement things like reverb/eq/mixing/etc. solely involves the operations of time delay, summation, and constant gain changes (when you boil the algorithms down to the bare elements). These are all linear operations. There's nothing preventing a developer from devising a nonlinear algorithm to implement any of these, and depending on what they want to accomplish, that might be a good thing -- for instance, one could hypothesize a reverb engine that adds progressively more distortion to progressively longer echoes, and find a good use for that (or perhaps even find a situation in the real world that one is actually trying to model).

All of the nonlinear examples I give intrinsically involve multiplication of two time-varying signals, and so are fundamentally nonlinear.
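The superposition test that formalizes "linear" can be checked numerically (toy effects with assumed names, not real plugins):

```python
# Linearity means f(a*x + b*y) == a*f(x) + b*f(y). A delay-plus-gain passes
# the test; hard clipping does not.
def delayed_gain(sig):
    return [0.0] + [0.5 * s for s in sig[:-1]]    # delay one sample, gain 0.5

def hard_clip(sig):
    return [max(-0.5, min(0.5, s)) for s in sig]  # static distortion

def is_linear(f, x, y, a=2.0, b=-3.0, tol=1e-9):
    combined = f([a * xi + b * yi for xi, yi in zip(x, y)])
    separate = [a * fx + b * fy for fx, fy in zip(f(x), f(y))]
    return all(abs(c - s) < tol for c, s in zip(combined, separate))

x, y = [0.1, 0.4, -0.3, 0.2], [0.0, -0.2, 0.5, 0.1]
print(is_linear(delayed_gain, x, y))  # True
print(is_linear(hard_clip, x, y))     # False
```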
HTS
post Jun 4 2011, 03:39
Post #20





Group: Members
Posts: 356
Joined: 13-October 07
Member No.: 47799



I just found out that one of my sample libraries (a very prestigious one) was recorded in 16 bits then "remastered" into 24 bits. Do companies usually cheat like that? I suppose Vienna Symphonic wouldn't, but other companies like EastWest have a more sour reputation, and I'm wondering if their 1000-dollar libraries are recorded at 16 bits then converted to 24 bits.
AndyH-ha
post Jun 4 2011, 04:46
Post #21





Group: Members
Posts: 2192
Joined: 31-August 05
Member No.: 24222



I would think it depends upon the age. Once 16 bits was as good as one could get; then for a while more than 16 bits required significantly more expensive equipment; but for some time now 24 bits has been drop-dead cheap and there isn't any reason to record professionally at 16 bits.
yojig
post Jun 4 2011, 11:23
Post #22





Group: Members
Posts: 8
Joined: 12-July 04
Member No.: 15371



QUOTE (HTS @ Jun 1 2011, 17:21) *
http://www.hydrogenaudio.org/forums/index....st=#entry583387

1. The above thread says we can convert 16-bit files into 24-bit files. Can I get an explanation of how this works? Magnifying a low-pixel-count image to a large size isn't the same thing as having a high-resolution image, and upsampling a 44.1kHz recording to 192kHz is the same by that token. Is bit depth in a different category?

Sounds to me like bullsh*t.


QUOTE (HTS @ Jun 1 2011, 17:21) *
3. I read somewhere that microphones can only do 18 bits at most, which is why CDs released back in the 1990s used and advertised 20-bit recording. So basically they are saying recording at 24 bits is a waste of storage space. Is this true? Many of the mics used to record the expensive sample libraries are not built with the newest technology; in fact many of them appear to be vintage mics that cost tens of thousands of dollars. Can devices designed back in the 1940s take advantage of a 24-bit recorder?

Bit depth directly affects your headroom. So if you want deeper dynamics in your recordings (e.g. you want to record an explosion without limiting the peaks, in all its beauty), then even with the most boutique and vintage mics you'll get better detail capture in 24 bits than in 16.

This post has been edited by yojig: Jun 4 2011, 11:25
Arnold B. Kruege...
post Jun 4 2011, 11:58
Post #23





Group: Members
Posts: 3537
Joined: 29-October 08
From: USA, 48236
Member No.: 61311



QUOTE (AndyH-ha @ Jun 3 2011, 23:46) *
I would think it depends upon the age. Once 16 bits was as good as one could get, then for awhile more than 16 bits required significantly more expensive equipment, but for some time now 24 bits has been drop dead cheap and there isn't any reason to record professionally at 16 bits.


Sure there is a good reason to record at 16 bits: 32 channels for an hour and a half.

A one hour stereo recording at 16/44 fits on a CD. A 24 bit recording does not. Not even on 80 minute CD-Rs.
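The arithmetic (raw PCM, no compression; "MB" here means 10^6 bytes):

```python
# Raw PCM size for a recording: seconds * channels * rate * bytes-per-sample.
def pcm_megabytes(seconds, channels, sample_rate, bits):
    return seconds * channels * sample_rate * (bits // 8) / 1e6

print(round(pcm_megabytes(3600, 2, 44100, 16)))  # 635 MB -- fits a 700 MB CD-R
print(round(pcm_megabytes(3600, 2, 44100, 24)))  # 953 MB -- does not
```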

If there were some real benefit to recording at 24 bits, then the argument that it doesn't cost much more would have some merit. But there is no audible benefit.

Recording everything at 24 bits is like saying "I have lots of money so I will always pay twice list price for everything that I buy". If it floats your boat, by all means do it!
Dirk95100
post Jun 4 2011, 12:25
Post #24





Group: Members
Posts: 33
Joined: 15-October 10
Member No.: 84639



QUOTE (Arnold B. Krueger @ Jun 4 2011, 11:58) *
Recording everything at 24 bits is like saying "I have lots of money so I will always pay twice list price for everything that I buy". If it floats your boat, by all means do it!

So the music industry has money to burn? :D
Thank god those engineers don't build bridges.
Ethan Winer
post Jun 4 2011, 16:41
Post #25





Group: Members
Posts: 248
Joined: 12-May 09
From: New Milford, CT
Member No.: 69730



QUOTE (DonP @ Jun 3 2011, 11:52) *
Audacity by default keeps and saves the audio in floating point. It only goes back to 16 fixed when you export to flac, wav, etc. Soundforge, from what I gather, can do the same.


Yes, and SONAR that I use does that too. Only when you do a final export is the 32-bit "result math" reduced to 16 or 24 bits.

In the grand scheme of things Arny is correct - you can't hear a difference. The main advantage of using 24 bits is it lets you be sloppy and careless when setting record levels. :D

--Ethan


--------------------
I believe in Truth, Justice, and the Scientific Method
