Topic: Feeding filterless DAC with SOX

Feeding filterless DAC with SOX

Reply #75
Actually, it corresponds perfectly to the Wikipedia definition:
http://en.wikipedia.org/wiki/Oversampling

Only if beta can be a non-integer value

Who says that it cannot?
Quote
, since 44.1 does not divide evenly into 192.

44.1 is not an integer. Why would you want the number of samples within 1 ms to match? Or the number of samples within 1 second, for that matter? The point is that 44100 Hz CD audio can be resampled to 192 kHz using readily available tools. This can be considered a "partial" D/A conversion; the DAC with its digital/analog filters would have to do the rest.
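For concreteness, a minimal Python/SciPy sketch of that resampling step (the 1 kHz test tone is a stand-in for real material; SoX or any other decent resampler does the same job on actual files):

Code:
# Minimal sketch (not anyone's exact workflow): take 44100 Hz material to
# 192000 Hz with a polyphase resampler.
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 44100, 192000
up, down = 640, 147                    # 192000/44100 reduced to lowest terms

t = np.arange(fs_in) / fs_in           # one second of a 1 kHz test tone
x = 0.5 * np.sin(2 * np.pi * 1000 * t)

y = resample_poly(x, up, down)         # anti-imaging low-pass is built in
print(len(x), "->", len(y))            # 44100 -> 192000 samples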

-k

Feeding filterless DAC with SOX

Reply #76
You really have two possible options here:

1)  Learn how DACs work so that you understand what's going on.
2)  Buy a device with the best performance and just assume it's doing the right thing because it works well.

But you won't be able to outsmart the people who design these devices without first understanding how they work at a very detailed level.  You won't get that from one or two papers or from asking questions on a forum.  You'll need textbooks, to pore over the math, and probably to work through that MATLAB code I posted until you understand what is going on.

Feeding filterless DAC with SOX

Reply #77
I'm not an expert in '80s MOSFETs, but I suspect the real limitation there was just the sampling rate.  In that era, high-end digital logic was in the 5-10 MHz range.  Running sensitive analog bits in a DAC at MHz rates may not have been possible with the logic of the day.  That's just speculation, though.  It could as well have been that even fixed-point multiply logic was too expensive for that era.  Looking at some old datasheets might give a clue.

I don't know much about the physical mechanisms in DACs, I have only calculated theoretical systems.

I always assumed that the popularity of oversampling (and delta-sigma, one-bit/few-bit DACs) had to be, at least in part, that it was relatively easy to make something that generated voltages at a rapid and precise rate (especially if it only flipped between two, or a few, voltages), while making something that directly hit the accurate voltage at a low temporal rate (and made it smooth) was hard. To me it sounds like extending the "digital" path of the DAC as far as possible, and making the "analog" path as short and simple as possible.

-k

Feeding filterless DAC with SOX

Reply #78
I always assumed that the popularity of oversampling (and delta-sigma, one-bit/few-bit DACs) had to be that it was easier to make something that flipped between two (or a few) voltages at a rapid and precise rate, than to make something that hit the accurate voltage at a low temporal rate.


That may have been part of it.  Lower oversampling ratios help with the passband flatness, but don't do much for SNR.  And if you can't scale the oversampling way up and take advantage of noise shaping, maybe there was just no justification for going above 4 or 8x until the 90s. 
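To put rough numbers on that (idealized textbook delta-sigma formulas, nothing measured, and the shaped figures ignore the derating a real modulator needs for stability):

Code:
# In-band quantization-noise improvement from oversampling alone vs.
# oversampling plus idealized Nth-order noise shaping (standard formulas).
import math

def gain_plain(osr):
    # plain oversampling: noise spread over a wider band, ~3 dB per doubling
    return 10 * math.log10(osr)

def gain_shaped(osr, order):
    # idealized Nth-order noise shaping
    return (2 * order + 1) * 10 * math.log10(osr) \
           - 10 * math.log10(math.pi ** (2 * order) / (2 * order + 1))

for osr in (4, 8, 64, 256):
    print(osr, round(gain_plain(osr), 1), "dB plain,",
          round(gain_shaped(osr, 2), 1), "dB with 2nd-order shaping")

At 4x or 8x the plain gain is only one or one and a half bits, which is the "doesn't do much for SNR" point; the shaping terms only pay off once the ratio gets large.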

Feeding filterless DAC with SOX

Reply #79
44.1 is not an integer.

The sample rate doesn't have to be, but the oversampling factor has historically been and AFAICT does have to be.

If it wasn't then multiple terms would likely not have entered the lexicon.

Oversampling was successfully done in inexpensive consumer devices a good ten years before SRC was.

As I attempted to intimate, the names have an historical basis.  Just because oversampling, as a process that was done in the '80s and early '90s, may no longer be relevant doesn't mean you get to redefine the term or deem it irrelevant, especially when someone is inquiring about differences in terminology.

Feeding filterless DAC with SOX

Reply #80
Oversampling by an integer ratio is much more efficient than by a rational ratio, because the order of the low-pass filter in the rational case has to be increased by a factor of the denominator of the ratio to maintain equal sharpness.  Other than that, I don't think there's anything special about integer vs. rational ratios.  An integer is just a special case of a rational.  You'd obviously never do the rational case if an integer would work, because it's more expensive, but if you needed to, you could.
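A back-of-the-envelope sketch of that cost difference (the 32 taps per phase is an arbitrary illustrative figure, not from any real design): the prototype filter effectively runs at the internally upsampled rate, so its total length scales with the reduced "up" factor of the ratio.

Code:
# Illustrative only: total prototype-filter length for equal output sharpness.
from math import gcd

def polyphase_cost(fs_in, fs_out, taps_per_phase=32):
    g = gcd(fs_in, fs_out)
    up, down = fs_out // g, fs_in // g
    # one prototype of (taps_per_phase * up) taps, split into
    # "up" polyphase branches of taps_per_phase taps each
    return up, down, taps_per_phase * up

print(polyphase_cost(44100, 4 * 44100))  # integer 4x: up=4,   128-tap prototype
print(polyphase_cost(44100, 192000))     # rational:   up=640, 20480-tap prototype

The work per output sample (one polyphase branch) is the same either way; it is the total filter order, coefficient storage, and design effort that balloon for awkward ratios.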

Feeding filterless DAC with SOX

Reply #81
Is it just me, or is the data sheet for the TDA1540 chip very sparse on specifications useful for audio? What I do see is that it is only 14 bits (so you are throwing away 2 bits of audio data), the S/N ratio is a very disappointing 85 dB, and there are no specifications for linearity or IM distortion. Since the chip uses an internal divider network, I suspect that linearity is probably its biggest weakness.

Of course, all of this is moot if the analog filter is not up to the task, and the data sheet calls for a 9th order filter.

All of this makes for a product that is probably overpriced while being distinctly inferior from a performance standpoint.

Feeding filterless DAC with SOX

Reply #82
Is it just me, or is the data sheet for the TDA1540 chip very sparse on specifications useful for audio? What I do see is that it is only 14 bits (so you are throwing away 2 bits of audio data), the S/N ratio is a very disappointing 85 dB, and there are no specifications for linearity or IM distortion. Since the chip uses an internal divider network, I suspect that linearity is probably its biggest weakness.

Of course, all of this is moot if the analog filter is not up to the task, and the data sheet calls for a 9th order filter.

All of this makes for a product that is probably overpriced while being distinctly inferior from a performance standpoint.


It is actually a little surprising that enough idiots exist to justify using a 30-year-old IC in a newish design.  Where do you even get the chips from?  I think that part would be pre-CMOS.  I don't even know if you could pay someone to fab it, so I guess there's a crate of ICs in someone's basement?

Feeding filterless DAC with SOX

Reply #83
Yeah, it should have tipped me off when it said the inputs were TTL compatible. 

Feeding filterless DAC with SOX

Reply #84
44.1 is not an integer.

The sample rate doesn't have to be, but the oversampling factor has historically been and AFAICT does have to be.

The oversampling factor can be any number >1, AFAIK. Choosing fractions that go well together simply allows you to limit the number of phases in your polyphase filterbank, which saves memory and allows you to fine-tune every phase, as opposed to more generic U/D fractional resamplers or continuous-time resamplers (see e.g. Julius Orion Smith III's web pages).
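To illustrate the phase-count point (pure arithmetic, no DSP): after reducing the rate fraction, the number of distinct filter phases you have to store equals the "up" factor.

Code:
# Illustrative only: distinct polyphase phases needed for a few conversions.
from math import gcd

for fs_in, fs_out in [(44100, 88200), (44100, 176400), (44100, 48000), (44100, 192000)]:
    up = fs_out // gcd(fs_in, fs_out)
    print(f"{fs_in} -> {fs_out}: {up} phases")

Two or four phases fit in a tiny ROM and can be hand-tuned; hundreds of phases is where you start interpolating between table entries or computing coefficients on the fly.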

The case described by the OP does fit the Wikipedia definition of oversampling, and the problem that oversampling tries to solve. If you don't want it to be oversampling, then fine, call it whatever you like. Let us get the facts clearly understood anyway.
Quote
If it wasn't then multiple terms would likely not have entered the lexicon.

There can be many reasons why our dictionary contains synonyms.
Quote
Oversampling was successfully done in inexpensive consumer devices a good ten years before SRC was.

As I attempted to intimate, the names have an historical basis.  Just because oversampling, as a process that was done in the '80s and early '90s, may no longer be relevant doesn't mean you get to redefine the term or deem it irrelevant, especially when someone is inquiring about differences in terminology.

Frankly, I don't get what you want to contribute. You seem to be hung up on terminology that you cannot document, and you offer technical explanations that are plainly wrong, on at least two occasions with great apparent self-confidence.

-k

Feeding filterless DAC with SOX

Reply #85
So because I was wrong on one point ("no digital filtering necessary") and happen to disagree with you on the other, I'm not contributing to the discussion?

The idea that there was a dedicated term for 2x, 4x, 8x up-sampling was floated before I ever posted, yet it definitely had to have come out of my ass, after all.

As to what I want to contribute: ignoring the responses that weren't on this particular point, I'm satisfied with the ones that corrected me, which ultimately resulted in greater knowledge being shared with the OP.

Nah, I'm sure it's clear to everyone that I'm a useless idiot who is only here to be disagreeable.  Who am I kidding?


Feeding filterless DAC with SOX

Reply #86
All of this makes for a product that is probably overpriced while being distinctly inferior from a performance standpoint.


So it's literally a piece of junk then (what we were trying to tell you! /quote in advance lol)

I appreciate the input on this thread; it has helped, though I don't think I will delve into the mathematics, that being another weakness of mine, lol. I'm a more creative individual. I thought it would be interesting to approach HA to discuss sampling, to steer me in the right direction with a greater prosumer (in between consumer and genius) understanding of the matter.

I would like to ask about another thing, sampling related of course.

I noticed on a particular DAC that the specification states "output sample rate", as opposed to the typical 'Input Sample Rates supported' normally seen, though this could just be a bad translation. If it is, and this device were to accept such a rate, can/does the device still over-sample xN, i.e. 352.8 kHz times eight!?

Sidenote:
I also have an old Yamaha card (SW1000XG) from '98 and was intrigued to find it ONLY accepts 44100. It has an 18-bit D/A chip on board.
D63200 Datasheet if anyone's interested in looking at that.
Going by the above chip spec (which mentions 8fs on p1, among the Japanese lol), is it safe to say this card most likely only over-samples 44100 x8?

Feeding filterless DAC with SOX

Reply #87
I appreciate the input on this thread it has helped, though I don't think I will delve into the mathematics, being it another weakness of mine lol.


Then basically this thread is a waste of your time.  What you care about, apparently, is performance, and you can generally find that elsewhere without regard for how any of these devices work.

I noticed on a particular DAC the specification states "output sample rate", as opposed to the typical 'Input Sample Rates supported' normally seen, though this could be just bad translation. If it is and this device were to accept such a rate, can/does the device still over-sample xN? i.e 352.8 times eight!?


For like the third or fourth time: typical devices are running at a couple of MHz or more internally.  The sample rate you input has basically no impact on that.  In 384 kHz mode it's probably running exactly the same as in 192 kHz mode, except with half the oversampling ratio.

I also have an old Yamaha card (SW1000XG) from '98 and was intrigued to find it ONLY accepts 44100. It has an 18-bit D/A chip on board.
D63200 Datasheet if anyone's interested in looking at that.
Going by the above chip spec (which mentions 8fs on p1 among the Japanese lol) is it safe to say this card most likely only over-samples 44100 x8?


I'd guess from that text that you can switch between 4x and 8x oversampling.  The max sampling rate is said to be 400 kHz, which is probably the oversampled rate, so I guess it's meant to do 44.1/48k @ 8x and 88.2/96k @ 4x.  The selection pin is there so that you can adjust the ratio when changing the clock, to avoid running out of spec.
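A quick sanity check of that guess (only the 400 kHz figure is taken from the datasheet as read above; the rest is arithmetic):

Code:
# Which oversampling ratios keep the internal rate under the stated 400 kHz?
for fs in (44100, 48000, 88200, 96000):
    for ratio in (4, 8):
        rate = fs * ratio
        print(f"{fs} x{ratio} = {rate} Hz", "ok" if rate <= 400_000 else "too fast")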

Feeding filterless DAC with SOX

Reply #88
I appreciate the input on this thread; it has helped, though I don't think I will delve into the mathematics, that being another weakness of mine, lol. I'm a more creative individual. I thought it would be interesting to approach HA to discuss sampling, to steer me in the right direction with a greater prosumer (in between consumer and genius) understanding of the matter.

This is my semi-educated non-mathematical position on your subject.

1. It does not matter - all DACs are transparent*)

2. Those that sell audiophile equipment often cannot be trusted. A large percentage of them are shady companies.

3. Those that do DIY audio without blind tests often cannot be trusted. They may be idealists, but that does not stop them from spreading erroneous conclusions.

good luck!

-k

*) I am sure that someone can find a corner case where I am proven wrong, but the general non-scientific, non-mathematical answer seems to be right in this context.


Feeding filterless DAC with SOX

Reply #89
44.1 is not an integer.
44100 is.

How the input and output samples are related in time is crucial to the practical design of the device. Most simple devices didn't calculate the sinc filter coefficients internally; they had them in a look-up table. In that table, you only need the filter coefficient values that relate to the time difference between each output sample and the near-ish input samples. For integer N:1 upsample ratios, the same small collection of coefficient sets (one per output phase) repeats for every input sample. For simple fractions, you may have a small handful of sets to cover all possibilities. A simple diagram would clarify this hugely, but I don't have time to draw one.

You don't waste memory storing values in the look-up table for any sub-sample time relations that are never needed. If you're working with input samples that are only ever 1/8th, 2/8th, 3/8th, 4/8th, 5/8th, 6/8th, 7/8th or 8/8th  of an input sample away from the output sample, you don't need the filter coefficients for a sample that's 7/16ths away.

However, for arbitrary ratios you do need the ability to calculate the sinc filter coefficients, either directly by maths, or by interpolating from the nearest values stored in a fine-grained look-up table.
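In lieu of the diagram, here is a minimal sketch of the look-up-table idea (a deliberately crude windowed sinc; the tap and phase counts are illustrative, not taken from any real chip):

Code:
# One coefficient set per phase offset, computed once and stored.
import numpy as np

TAPS = 16          # coefficients per phase (illustrative)
PHASES = 8         # e.g. 8x oversampling: offsets 0/8, 1/8, ..., 7/8

def windowed_sinc(offset, taps=TAPS):
    # coefficients for an output sample sitting `offset` of an input
    # period after the nearest input sample
    n = np.arange(-taps // 2, taps // 2) - offset
    return np.sinc(n) * np.hamming(taps)

# the whole table: PHASES x TAPS numbers held in ROM
table = np.array([windowed_sinc(p / PHASES) for p in range(PHASES)])
print(table.shape)   # (8, 16) -- an offset of 7/16ths simply never occurs

At run time you just pick the row matching the output sample's phase and take a dot product with the nearby input samples.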


Zero stuffing then filtering is one approach to increasing the sample rate; filtering then throwing away (decimating) samples is one approach to decreasing the sample rate. Neither works for arbitrary ratios, only integer:1 sample rate multipliers or dividers. You can of course use the approach with arbitrary ratios, but because you can only add or throw away whole samples, you introduce repetitive jitter by rounding to the nearest sample (e.g. if you really needed to zero stuff by 7.5 samples, you'd have 7 zeros, original sample, 8 zeros, original sample, 7 zeros, etc. to get the average to 7.5 samples, but this moves/jitters each original sample +/-0.5 samples away from where it should be), and that manifests itself as distortion.
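For the well-behaved integer case, the whole recipe fits in a few lines (illustrative only, with a deliberately simple filter):

Code:
# Integer-ratio upsampling as described above: stuff L-1 zeros between
# samples, then low-pass to remove the images.
import numpy as np
from scipy.signal import firwin, lfilter

L = 4                                  # integer upsampling factor
x = np.random.randn(1000)              # stand-in for real audio

stuffed = np.zeros(len(x) * L)
stuffed[::L] = x                       # original samples, zeros in between

h = firwin(255, 1.0 / L) * L           # anti-imaging low-pass, gain restored
y = lfilter(h, 1.0, stuffed)           # x at 4x the rate (plus filter delay)
print(len(x), "->", len(y))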

To do it properly for arbitrary ratios, it's better to forget the concept of zero stuffing and just calculate the output samples from the input samples using the sinc filter. Zero stuffing followed by a sinc filter is actually a two-stage, restricted-use but conceptually simple way of doing exactly this (but only ideal for integer:1 ratios).

Zero stuffing to the lowest common multiple is conceptually simple and works for arbitrary ratios, and if you only calculate the samples you'll keep after subsequent decimation, it becomes numerically equivalent to using the sinc filter directly.
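And a sketch of that equivalence (both paths share the same hand-rolled prototype filter, so this only demonstrates the numerics; it is not a recommendation for a production resampler):

Code:
# Brute force: zero-stuff to the common rate, filter, decimate.
# Direct: compute one output sample from the taps that land on real inputs.
import numpy as np
from scipy.signal import firwin, lfilter

up, down = 160, 147                    # 44.1 kHz -> 48 kHz after reduction
x = np.random.randn(500)
h = firwin(16 * up + 1, 1.0 / max(up, down)) * up   # shared prototype filter

stuffed = np.zeros(len(x) * up)
stuffed[::up] = x
brute = lfilter(h, 1.0, stuffed)[::down]   # filter everything, keep every 147th

def direct(m):
    # only the filter taps that line up with real (non-zero) input samples
    # contribute, i.e. one polyphase branch of h
    acc = 0.0
    for k, hk in enumerate(h):
        n_high = m * down - k              # position at the stuffed (high) rate
        if n_high >= 0 and n_high % up == 0 and n_high // up < len(x):
            acc += hk * x[n_high // up]
    return acc

m = 300
print(brute[m], direct(m))                 # agree to numerical precision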


I'm not sure I'd want to argue too strongly over the terms, because (like many terms) they are ones that have lost some of their meaning through misuse. But the integer:1 ratios lend themselves to a technique that arbitrary ratios do not.

I bet some here know some clever implementation tricks that blur the distinction further, but I think this is already confusing enough.

Cheers,
David.