There are really two issues. The first is that different-colour LEDs have very different forward voltages, so you cannot parallel them at all, period.
Secondly, with identical LEDs from the same batch the forward voltages are likely well matched, so in parallel they will all light up - but the currents are not shared equally, because current is an exponential function of voltage. Individual devices vary slightly in voltage, so thanks to that sharp exponential the currents vary a lot - perhaps by a factor of 2 or 3.
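To put a number on that exponential, here is a quick sketch using the diode equation; the ~50 mV exponential slope (n·Vt) is an assumed, typical figure for an LED, not measured data:

```python
import math

# Sketch: LED current follows roughly I = Is * exp(V / (n*Vt)),
# where n*Vt is around 50 mV for many LEDs (assumed value).
N_VT = 0.050  # ideality factor times thermal voltage, volts

def current_ratio(delta_v):
    """Current ratio between two paralleled LEDs whose forward
    voltages differ by delta_v volts at the same nominal current."""
    return math.exp(delta_v / N_VT)

# A mere 35 mV mismatch - well within typical Vf tolerance -
# roughly doubles the current in one LED relative to the other.
print(current_ratio(0.035))  # ~2.0
```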
Because of this extreme sensitivity of current to voltage, and because there is usually no guarantee your LEDs come from the same batch, the standard advice is a separate current-limiting resistor per LED - this keeps the currents well matched, and you can run all the LEDs at full current safely.
With LEDs in parallel the variation in current means some get a lot more than the average, so you can't run them at full brightness without overloading some of them. You have to derate the average current, wasting LED capacity - basically it's rough-and-ready engineering.
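As a back-of-envelope derating sketch (the rated current and imbalance factor below are illustrative assumptions):

```python
# If the current imbalance between paralleled LEDs can reach a
# factor k, the average per-LED current must stay below I_max / k
# so that even the greediest device stays within its rating.
i_max = 0.020   # rated per-LED current, amps (assumed)
k = 2.5         # worst-case imbalance factor (the 2-3x above)
i_avg_safe = i_max / k
print(i_avg_safe)  # 0.008 -> only 8 mA average from a 20 mA part
```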
There is a further reason not to parallel devices, and that's self-heating. With high-power LEDs the chips heat up a lot, and at higher temperatures the current at a given forward voltage increases dramatically (forward voltage decreases with temperature). This can lead to thermal runaway: the hottest LED steals most of the available current and gets even hotter, takes more current, and ultimately melts - there is then more current available to the others and the process repeats until all fail. Sharing a heat-sink can reduce the severity of this effect, but it's not a sensible way to design LED lighting...
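A rough sketch of that feedback loop; the -2 mV/°C tempco and the 50 mV exponential slope are typical assumed values, not from the answer above:

```python
import math

# Forward voltage drops roughly 2 mV per degree C for a typical
# LED (assumed), and current at a fixed applied voltage grows as
# exp(delta_v / (n*Vt)) with n*Vt around 50 mV (assumed).
TEMPCO = -0.002   # V per degree C
N_VT = 0.050      # volts

def current_multiplier(delta_temp):
    """How much the current grows at a fixed applied voltage when
    one LED runs delta_temp degrees hotter than its neighbours."""
    return math.exp(-TEMPCO * delta_temp / N_VT)

# Running just 30 C hotter already more than triples the current -
# which heats the chip further, closing the runaway loop.
print(current_multiplier(30))  # ~3.3
```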
The ideal way to power LEDs is through a constant-current source/sink for each string of LEDs. Then the current through every LED is set exactly as wanted - particularly useful if the power supply voltage can vary.
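One cheap way to build such a constant-current source, sketched here with an LM317 regulator (the part choice and resistor value are my assumptions, not part of the answer): the LM317 servos 1.25 V between its OUT and ADJ pins, so a single resistor sets the string current.

```python
# LM317-as-current-source sketch: the regulator holds 1.25 V
# across a resistor between OUT and ADJ, so the string current
# is simply I = V_ref / R, independent of supply variation
# (within the regulator's dropout and power limits).
V_REF = 1.25  # LM317 reference voltage, volts

def lm317_current(r_ohms):
    """Current forced through the LED string for a given resistor."""
    return V_REF / r_ohms

print(lm317_current(62))  # ~0.02 A with a 62 ohm resistor
```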
Using a fixed voltage supply and a current-limiting resistor is a cheaper and nearly-as-good alternative, so long as some voltage is wasted across the resistor. Variations between devices and with temperature still cause some current variation, but it's not a problem (except for semiconductor lasers).
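To see why the resistor tames the variation, compare with the exponential case above; the supply, forward voltage, and current figures are illustrative assumptions:

```python
# Sketch of a per-LED ballast resistor (assumed example numbers).
v_supply = 12.0    # supply voltage, volts
v_f = 3.0          # nominal LED forward voltage, volts
i_target = 0.020   # 20 mA target current

r = (v_supply - v_f) / i_target  # 450 ohms

def current(vf_actual):
    """Current for a device with the given actual forward voltage."""
    return (v_supply - vf_actual) / r

# A 100 mV device-to-device Vf spread now shifts the current by
# only ~0.1 V / 450 ohm, i.e. about 1% - not a factor of 2.
print(current(3.0))  # 0.020 A
print(current(3.1))  # ~0.0198 A
```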
If you parallel LEDs then you have less control over the current variations, you have to derate them, and you need to use identical devices. This is generally a sub-standard approach. (If you are powering the LEDs from a coin cell, note that the battery's own internal resistance acts as a current limiter, so you can potentially guarantee no LED will overload.) If you are just prototyping something and don't care, that's fine by me - but think of it as a dirty short-cut!