Solved – the difference between probability and fuzzy logic

bayes, fuzzy

I have been working with fuzzy logic (FL) for years and I know there are differences between FL and probability, especially in the way FL deals with uncertainty. However, I would like to ask: what other differences exist between FL and probability?

In other words, if I deal with probabilities (fusing information, aggregating knowledge), can I do the same with FL?

Best Answer

Perhaps you're already aware of this, but Chapters 3, 7 and 9 of George J. Klir and Bo Yuan's Fuzzy Sets and Fuzzy Logic: Theory and Applications (1995) provide in-depth discussions on the differences between the fuzzy and probabilistic versions of uncertainty, as well as several other types related to Evidence Theory, possibility distributions, etc. It is chock-full of formulas for measuring fuzziness (uncertainties in measurement scales) and probabilistic uncertainty (variants of Shannon's Entropy, etc.), plus a few for aggregating across these various types of uncertainty. There are also a few chapters on aggregating fuzzy numbers, fuzzy equations and fuzzy logic statements that you may find helpful. I translated a lot of these formulas into code, but am still learning the ropes as far as the math goes, so I'll let Klir and Yuan do the talking. :) I was able to pick up a used copy for $5 a few months back. Klir also wrote a follow-up book on Uncertainty around 2004, which I have yet to read. (My apologies if this thread is too old to respond to - I'm still learning the forum etiquette).

Edited to add: I’m not sure which of the differences between fuzzy and probabilistic uncertainty the OP was already aware of and which he needed more info on, or what types of aggregations he meant, so I’ll just provide a list of some differences I gleaned from Klir and Yuan, off the top of my head. The gist is that yes, you can fuse fuzzy numbers, measures, etc. together, even with probabilities – but it quickly becomes very complex, albeit still quite useful.

  1. Fuzzy set uncertainty measures a completely different quantity than probability and its measures of uncertainty, like the Hartley Function (for nonspecificity) or Shannon's Entropy. Fuzziness and probabilistic uncertainty don't affect each other at all. There is a whole range of measures of fuzziness available, which quantify uncertainty in measurement boundaries (related to, but not identical to, the measurement uncertainty normally discussed on CrossValidated). The "fuzz" is added mainly in situations where it would be helpful to treat an ordinal variable as continuous, none of which has much to do with probabilities. (The first sketch after this list compares the two kinds of measure on a toy example.)

  2. Nevertheless, fuzzy sets and probabilities can be combined in myriad ways - such as adding fuzzy boundaries on probability values, or assessing the probability of a value or logical statement falling within a fuzzy range. This leads to a huge, wide-ranging taxonomy of combinations (which is one of the reasons I didn't include specifics before my first edit).

  3. As far as aggregation goes, the measures of fuzziness and the entropic measures of probabilistic uncertainty can sometimes be summed together to give total measures of uncertainty (the first sketch after this list ends with such a sum).

  4. To add another level of complexity, fuzzy logic, numbers and sets can all be aggregated, which can affect the amount of resulting uncertainty. Klir and Yuan say the math can get really difficult for these tasks, and since equation translations are one of my weak points (so far), I won't comment further. I just know these methods are presented in their book.

  5. Fuzzy logic, numbers, sets etc. are often chained together in a way probabilities aren't, which can complicate computation of the total uncertainty. For example, a computer programmer working in a Behavior-Driven Development (BDD) system might translate a user's statement that "around half of these objects are black" into a fuzzy statement (around) about a fuzzy number (half). That would entail combining two different fuzzy objects to derive the measure of fuzziness for the whole thing.

  6. Sigma counts play a bigger role in aggregating fuzzy objects than the kind of ordinary counts used in statistics. A sigma count is just the sum of the membership grades, so it is never greater than the ordinary "crisp" count: the membership functions that define fuzzy sets (which are always on the 0 to 1 scale) measure partial membership, so a record with a grade of 0.25 only counts as a quarter of a record. (The second sketch after this list shows a sigma count feeding the fuzzy quantifier from point 5.)

  7. All of the above gives rise to a really complex set of fuzzy statistics, statistics on fuzzy sets, fuzzy statements about fuzzy sets, etc. If we're combining probabilities and fuzzy sets, we now have to consider, for example, which of several different types of fuzzy variance to use.

  8. Alpha cuts are a prominent feature of fuzzy set math, including the formulas for calculating uncertainties. An alpha cut at a given level is the crisp set of elements whose membership grade is at least that level, so raising the level carves the dataset into nested sets. I haven't yet encountered a similar concept with probabilities, but keep in mind that I'm still learning the ropes. (The third sketch after this list shows the nesting.)

  9. Fuzzy sets can be interpreted in nuanced ways that produce the possibility distributions and belief scores used in fields like Evidence Theory, which includes the subtle concept of probability mass assignments. I liken it to the way in which conditional probabilities etc. can be reinterpreted as Bayesian priors and posteriors. This leads to separate definitions of fuzziness, nonspecificity and entropic uncertainty, although the formulas are obviously similar. These interpretations also give rise to strife, discord and conflict measures, which are additional forms of uncertainty that can be summed together with ordinary nonspecificity, fuzziness and entropy.

  10. Common probabilistic concepts like the Principle of Maximum Entropy are still operative, but sometimes require tweaking. I'm still trying to master the ordinary versions of them, so I can't say more than to point out that I know the tweaks exist.
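
To make points 1 and 3 a little more concrete, here is a minimal sketch in Python (my own illustration, not code from the book) that computes one simple measure of fuzziness, based on how far each membership grade sits from a crisp 0 or 1, alongside Shannon's Entropy for a separate probability distribution, and then sums the two as a crude total. The membership grades and probabilities are made-up values.

```python
import numpy as np

def fuzziness(membership):
    """One measure of the degree of fuzziness of a fuzzy set: how far each
    membership grade sits from the nearest crisp value (0 or 1), summed over
    all elements. A crisp set scores 0; grades of 0.5 contribute the most."""
    m = np.asarray(membership, dtype=float)
    return float(np.sum(1.0 - np.abs(2.0 * m - 1.0)))

def shannon_entropy(probs):
    """Shannon's Entropy (in bits) of a discrete probability distribution."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                      # treat 0 * log(0) as 0
    return float(-np.sum(p * np.log2(p)))

# Membership grades of five objects in a fuzzy set such as "tall" (made-up values)
tall = [0.1, 0.4, 0.5, 0.9, 1.0]

# A probability distribution over five outcomes (made-up values)
outcomes = [0.1, 0.2, 0.4, 0.2, 0.1]

f = fuzziness(tall)                   # uncertainty about where the set's boundary lies
h = shannon_entropy(outcomes)         # uncertainty about which outcome occurs
print("fuzziness:", f)
print("entropy  :", h)
print("total    :", f + h)            # point 3: the two can sometimes be summed
```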
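
For points 5 and 6, a second sketch: the sigma count of a fuzzy set "black", turned into a proportion and fed into a hypothetical triangular membership function for the fuzzy quantifier "around half". The grades, the around_half function and its spread parameter are illustrative choices of mine, not anything prescribed by Klir and Yuan.

```python
import numpy as np

def sigma_count(membership):
    """Sigma count: the sum of the membership grades, so it is never greater
    than the crisp count of records."""
    return float(np.sum(np.asarray(membership, dtype=float)))

def around_half(proportion, spread=0.25):
    """Hypothetical triangular membership function for the fuzzy quantifier
    'around half': 1 at a proportion of 0.5, falling to 0 at 0.5 +/- spread."""
    return max(0.0, 1.0 - abs(proportion - 0.5) / spread)

# Degrees to which six objects belong to the fuzzy set "black" (made-up values)
black = [1.0, 0.9, 0.25, 0.7, 0.1, 0.0]

count = sigma_count(black)            # 2.95 "records", versus 6 records counted crisply
proportion = count / len(black)       # roughly 0.49

# Truth value of the fuzzy statement "around half of these objects are black"
print("sigma count:", count)
print("truth of 'around half are black':", round(around_half(proportion), 2))
```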
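
For point 8, a third sketch showing how alpha cuts at increasing levels carve a fuzzy set into nested crisp sets; the fuzzy set warm_days and its grades are made up.

```python
def alpha_cut(fuzzy_set, alpha):
    """The alpha cut of a fuzzy set: the crisp set of elements whose
    membership grade is at least alpha."""
    return {x for x, grade in fuzzy_set.items() if grade >= alpha}

# A small fuzzy set given as element -> membership grade (made-up values)
warm_days = {"Mon": 0.2, "Tue": 0.5, "Wed": 0.8, "Thu": 1.0, "Fri": 0.5}

# Raising alpha produces smaller, nested crisp sets
for alpha in (0.2, 0.5, 0.8, 1.0):
    print(alpha, sorted(alpha_cut(warm_days, alpha)))
```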

The long and the short of it is that these two distinct types of uncertainty can be aggregated, but that this quickly blows up into a whole taxonomy of fuzzy objects and stats based on them, all of which can affect the otherwise simple calculations. I don't even have room here to address the whole smorgasbord of fuzzy formulas for intersections and unions, which include the T-norms and T-conorms that are sometimes used in the above calculations of uncertainty (a few common pairs are sketched below). I can't provide a simple answer, but that's not just due to inexperience - even 20 years after Klir and Yuan wrote, a lot of the math and use cases still don't seem settled. For example, I can't find a clear, general guide on which T-norms and T-conorms to use in particular situations, yet that choice will affect any aggregation of the uncertainties. I can look up specific formulas for some of these if you'd like; I coded some of them recently so they're still somewhat fresh. On the other hand, I'm an amateur with rusty math skills, so you'd probably be better off consulting these sources directly. I hope this edit is of use; if you need more clarification/info, let me know.
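
For what it's worth, here is a last sketch of three of the standard T-norm/T-conorm pairs (minimum/maximum, algebraic product/probabilistic sum, and the Lukasiewicz bounded pair) applied to one element's membership grades in two fuzzy sets. The grades are arbitrary, and which pair to pick is exactly the unsettled modelling choice I mentioned.

```python
# Three standard T-norm / T-conorm pairs for fuzzy intersection and union,
# applied to one element's membership grades in two fuzzy sets A and B.
a, b = 0.6, 0.7   # made-up membership grades

pairs = {
    "minimum / maximum":           (min(a, b),           max(a, b)),
    "product / probabilistic sum": (a * b,               a + b - a * b),
    "Lukasiewicz (bounded)":       (max(0.0, a + b - 1), min(1.0, a + b)),
}

for name, (t_norm, t_conorm) in pairs.items():
    print(f"{name:28s} intersection={t_norm:.2f}  union={t_conorm:.2f}")
```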
