Hello everyone,
I am having trouble figuring something out in the example model named "DC/DC Buck Converter" from the "Support Package for Texas Instruments C2000 Processors".
In this model, inside the subsystem "PI_Controller_ISR", the output of the ADC block goes through two successive Data Type Conversion blocks. The ADC output is 12-bit, so it is an integer from 0 to 4095. But the first conversion block is set to output a fixdt(0,16,12) number, which can range from 0 to 15.999755859375 with a precision of 0.000244140625 (according to the Data Type Assistant in the block), and the second conversion block is set to output a fixdt(1,32,24) number (from -128 to 127.99999994039536, with a precision of 5.960464477539063e-08).
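To double-check the numbers the Data Type Assistant reports, I reproduced them with a quick Python sketch (this is just my own helper for the arithmetic, not anything from the model or a toolbox):

```python
def fixdt_props(signed, word_len, frac_len):
    """Range and precision of a Simulink-style fixdt(signed, word_len, frac_len) type."""
    eps = 2.0 ** -frac_len                       # precision = 2^-fraction_length
    if signed:
        lo = -(2 ** (word_len - 1)) * eps        # most negative stored integer
        hi = (2 ** (word_len - 1) - 1) * eps     # most positive stored integer
    else:
        lo = 0.0
        hi = (2 ** word_len - 1) * eps
    return lo, hi, eps

print(fixdt_props(0, 16, 12))  # (0.0, 15.999755859375, 0.000244140625)
print(fixdt_props(1, 32, 24))  # (-128.0, 127.99999994039536, 5.960464477539063e-08)
```

So the ranges I quoted above do follow directly from the word length and fraction length of each type.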
Why are these two conversion blocks configured this way? Their input comes from the ADC and is an integer between 0 and 4095, so why do they have these output ranges? They can never produce an output greater than 15.999!
Secondly, why is such high fractional precision required when the input is an integer?
And one more thing: why are two blocks used? The first one is set to "Stored Integer (SI)" and the second one to "Real World Value (RWV)". I don't know the difference between these two modes, or why it has to be done this way.
I would really appreciate it if someone could help me with this. Thanks in advance for your answers.