Can someone explain in simple terms how the Clocking Wizard chooses the suggested configuration to produce a given frequency out of the many possibilities? I understand the calculation and constraints, but my question is really, given two valid configurations that will produce the desired output frequency, what are the rules of thumb to choose one configuration over another?

PG065 provides some details about the Clocking Wizard. UG472, the 7 Series FPGAs Clocking Resources User Guide, also has some useful information. Obviously, the place to direct a question like "How is the output jitter calculated?" is the AMD/Xilinx Community site. I doubt that you'll get the answer you are looking for, but it might be worth trying.


Obviously, it's possible to have different sets of multiply and divide ratios that create the same output clock frequency from a particular input clock frequency. A lower VCO frequency would likely have lower power dissipation than a higher one. You also have the choice of MMCM or PLL.
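
As a rough illustration (the VCO limits below are assumptions; the real ranges depend on device and speed grade, see DS181/UG472), two settings can hit the same output frequency while parking the VCO at very different points:

# Sketch only: 7 Series MMCM frequency relationships,
#   f_vco = f_in * M / D     (CLKFBOUT_MULT_F / DIVCLK_DIVIDE)
#   f_out = f_vco / O        (CLKOUTn_DIVIDE)
# VCO range assumed to be 600-1200 MHz; check the datasheet for your part.
f_in = 100.0  # MHz input clock
for m, d, o in [(6, 1, 4), (12, 1, 8)]:   # two candidate settings
    f_vco = f_in * m / d
    f_out = f_vco / o
    print(f"M={m:2d} D={d} O={o}: VCO = {f_vco:6.1f} MHz, out = {f_out:5.1f} MHz")
# Both give 150 MHz out; as noted above, the lower-VCO variant likely
# dissipates less power, and this is the kind of choice the wizard makes for you.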

If Xilinx has ever published an exhaustive description of how the input jitter filter works, I've not come across it. There are a lot of details about programmable logic that vendors see little upside, and plenty of downside, in publishing. Engineers usually aren't the ones making that determination.

Anyone can read through the vast quantity of documentation, errata, and engineering notices to find information that isn't easy to run into. Your question would seem to be one that is best answered from the source rather than from random posters like me. If you work for a company that is an AMD/Xilinx customer, there's the possibility of getting information, under an NDA, that isn't going to be made available to the general public.

I have had some success in the past by measuring behavior rather than trying to do exegesis on the (limited) documentation provided by Xilinx. I'm lucky enough to have a Keysight 53230A at my disposal that can measure jitter-related parameters in the picosecond range, and there you do see differences in behavior when testing multiple configurations that produce the same frequency. There is no substitute for just looking at how the damn thing works in real life, but the equipment to do that is unfortunately quite expensive. If you do this for work and it's important, perhaps you can pitch the acquisition of such a device.

Personally, I do not rely on the Clocking Wizard in Vivado. Instead, I took a deep dive into the MMCM/PLL documentation provided by Xilinx to build a useful-enough mental model of how these things work under the hood, and I instantiate MMCMs and PLLs explicitly in my VHDL code, setting all parameters by hand. I also wrote a few Python scripts that let me figure out the different options for generating clock frequencies given the resources available; often you will need to build chains of multiple MMCMs/PLLs. As it turns out, most of the time you end up with merely a handful of serious options, and if low jitter is truly important (which it often isn't) I just measure to select the most appropriate one.
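
As a flavor of what such a script can look like, here is a minimal brute-force sketch; the divider ranges, 1/8-step fractional multiplier, VCO limits, and the 24.576 MHz target are all assumptions you would replace with the numbers from your own datasheet and application:

# Sketch: brute-force search for MMCM settings near a target output frequency.
# Ranges below are assumptions for a 7 Series MMCM; check DS181/UG472 for the
# actual limits of your device and speed grade.
F_IN = 100.0                       # MHz reference clock
TARGET = 24.576                    # MHz, e.g. a 512 x 48 kHz audio master clock
VCO_MIN, VCO_MAX = 600.0, 1200.0   # MHz, assumed VCO range
MAX_PPM = 500.0                    # keep only near-misses better than this
candidates = []
for d in range(1, 57):                      # DIVCLK_DIVIDE
    for m8 in range(2 * 8, 64 * 8 + 1):     # CLKFBOUT_MULT_F in 1/8 steps
        m = m8 / 8.0
        f_vco = F_IN * m / d
        if not (VCO_MIN <= f_vco <= VCO_MAX):
            continue
        for o in range(1, 129):             # CLKOUT_DIVIDE (integer-only here)
            f_out = f_vco / o
            err_ppm = (f_out - TARGET) / TARGET * 1e6
            if abs(err_ppm) <= MAX_PPM:
                candidates.append((abs(err_ppm), m, d, o, f_vco, f_out))
for err, m, d, o, f_vco, f_out in sorted(candidates)[:10]:
    print(f"M={m:6.3f} D={d:2d} O={o:3d}  VCO={f_vco:7.1f} MHz  "
          f"out={f_out:.6f} MHz  ({err:.0f} ppm)")

For an awkward audio rate like this one there is no exact hit from a 100 MHz reference, which is exactly where the MMCM/PLL chains mentioned above (or a different reference oscillator) come in.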

Thanks for the info. I have read about the clocking resources, and I have written an app that can enumerate all the configurations that achieve a given frequency. The question is, given two configurations, which one is better (where I define better as lower jitter)? For example, is a higher VCO frequency better for output jitter? Are integer divisors better than fractional divisors? It is not clear to me from the docs.

The application is digital audio. When transmitting digital audio, it is desirable to minimize jitter on the signal. My thinking is that if I have two possible configurations to achieve a desired output frequency, then I might as well choose the configuration that provides the lower jitter.

In the Clocking Wizard, changing the jitter optimization between the "minimize output jitter" and "balanced" options causes the MMCM configuration to change, and sometimes even the output clock frequency.

A bit of googling appears to show that the human perception threshold for jitter is on the order of a few hundred nanoseconds (ref: "Detection threshold for distortions due to jitter on digital audio"), and the jitter reported by the Clocking Wizard is three orders of magnitude below that.


I know that jitter is a big thing in the audiophile world, but it could be that this is more a persistent cult belief stemming from the early days of digital audio than something supported by proper testing with modern clocks and DACs. So while I think it is interesting to think about these things, and I have been involved in work where it actually does matter, it could be that the difference between 200 and 500 ps of jitter is irrelevant.


If I were to compare different options, I would not put a lot of value in the numbers output by the Clocking Wizard. The model they use may suck, and/or it may rest on unrealistic assumptions. We have no way of knowing, because it's not publicly documented, and the extent to which it was validated is also unknown. In cases like this I'd rather trust a measurement than a spec.


EDIT - one problem with the paper I linked to, but also with the jitter reported by the Clocking Wizard, is that they only consider uncorrelated sample-to-sample jitter, which, it turns out, is way below human perception thresholds. But jitter is timescale-dependent (see Allan variance), so it's effectively a spectrum of deviation versus timescale, and reporting on / optimizing for only the shortest timescale (edge-to-edge) is probably somewhat misguided when thinking about audio perception. It may well be that human perception is sensitive to jitter on other timescales, but that would take a different experiment and a different clocking wizard to figure out.
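
If you have a counter that can log edge time errors (such as the 53230A mentioned earlier), one standard way to look across timescales is the overlapping Allan deviation. Below is a generic sketch of that computation; the toy data and variable names are my own assumptions, not anything from the wizard or the paper:

import numpy as np
def overlapping_adev(phase_s, tau0_s, m):
    """Overlapping Allan deviation from phase (time-error) samples.
    phase_s : time error of each edge versus an ideal clock, in seconds
    tau0_s  : spacing between samples, in seconds
    m       : averaging factor; result is the deviation at tau = m * tau0_s
    """
    x = np.asarray(phase_s, dtype=float)
    n = len(x)
    if n < 2 * m + 1:
        raise ValueError("not enough samples for this averaging factor")
    # second difference of the phase over a span of m samples
    d2 = x[2 * m:] - 2.0 * x[m:n - m] + x[:n - 2 * m]
    tau = m * tau0_s
    return float(np.sqrt(np.mean(d2 ** 2) / (2.0 * tau ** 2)))
# Toy data: 10 ps RMS white phase noise sampled every 100 ns
rng = np.random.default_rng(0)
phase = rng.normal(0.0, 10e-12, size=100_000)
for m in (1, 10, 100, 1000):
    print(f"tau = {m * 1e-7:.1e} s   ADEV = {overlapping_adev(phase, 1e-7, m):.3e}")

For white phase noise the deviation falls off as 1/tau, whereas other noise processes flatten out or climb at longer timescales, which is the "spectrum of timescale vs deviation" point above.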


Still, what remains is that the edge-to-edge jitter reported by the clocking tool is a few orders of magnitude below what's perceptible, so it is probably safe to say that the difference between the solutions can be ignored for the purposes of audio.

If you want to get down and dirty with controlling the clocking, you can use the MMCM_DRP module. You can use this method with an HDL, though the documentation suggests that you need a processor and an AXI bus.


I ran into this from XAPP888:


Filter Group

This group cannot be calculated and is based on lookup tables created from device characterization. There are effectively two tables, one for each bandwidth setting. The feedback divider setting (CLKFBOUT_MULT) acts as the index to the chosen table. There are three bandwidth settings allowable in the tools (High, Low, and Optimized), but in effect there are only two. High and Optimized use the same table, while the Low bandwidth setting uses a separate table. The filter group has an effect on the phase skew and the jitter filtering capability of the MMCM. The lookup table is located in the reference design within mmcm_drp_func.h.


The 7 Series devices are pretty complicated. Finding obscure details can be a lengthy journey without a good map. I suppose it boils down to how well one can speed-read through a lot of documents that are hard to find and notice the right reference.


I suppose that if you have access to the right test gear you can infer internal clock jitter performance from an output pin, but few of us do. Personally, the Clocking Wizard is one of the few pieces of FPGA device IP that I use consistently, regardless of vendor, though I rarely have a need to calculate clock jitter for the applications that I do. If you are using an advanced audio standard like AES3, I suppose that you need to. Most people don't even bother to verify that the particular clock module on a board meets the default input clock jitter setting. I would suggest the external clock source is where you need to start if you want to analyze jitter at some downstream point in a system. It can be interesting to change that default jitter value for clk_in and see what output clock options the wizard provides.

The reference above points out a reality that we don't usually consider. The end result of how your FPGA performs is not just a function of the device architecture and internal design, but also of how the tools manipulate that hardware to provide an outcome consistent with what the documentation says. For some device features this is more apparent than for others. I've noticed what appears to be more control in the tools over how the hardware performs in the UltraScale devices. This has two consequences for users. One is that it's harder for users to use certain features in a fine-grained way. Two, the vendor has more control over what the user can do with the device by following published documentation. The result is that the user is more dependent on tool IP that doesn't reveal source code. Maybe I'm the only person to see a trend here....

Can you elaborate on this? Do you have empirical data that has driven this conclusion? I'm not trying to be snarky... sometimes device vendors supply bad documentation or not enough documentation. Sometimes devices of a particular flavor have bugs. I'm curious.

This statement reminded me of an experience I had as an engineering student working as an intern at a power plant many years ago. One of the engineers was an audiophile. I remember him showing off his very expensive turntable and audio gear to me and elaborating on the nuanced details that he could pick out with the new gear as opposed to lesser gear. A few weeks later everyone at the plant was given a hearing exam as part of a company-wide directive. Turns out that the man was deaf outside the 0.5-12 kHz range. Coal-fired power plants are places with a high level of constant background noise and occasional peaks well above that. He should have worn his hearing protection....
