The problem is that it's not a good idea to control the brightness of an LED by varying its voltage.
Take a look at the attached datasheet excerpt, taken from a warm-white LED from Nichia. As you can see from the figure on the right, the brightness depends nearly linearly on the current. That is, if you ramped the current from zero to its maximum, the brightness of the LED would increase in the same linear fashion.
The figure on the left shows how the current depends on the voltage - and that dependency is exponential! This means that even a small increase in voltage makes the current shoot up very quickly, and the brightness with it. This is what your circuit is doing. The resistor in series with the LED linearizes the behavior a little, but the better way is to use a current source controlled by a voltage. There are lots of examples on the internet.
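To see why the resistor helps, a quick back-of-the-envelope calculation (the numbers here are only an example; take the forward voltage from your LED's datasheet): the resistor sets the current to roughly I = (Vsupply - Vf) / R, because the steep diode curve pins the LED voltage near Vf. With a 5 V supply, a white LED with Vf around 3.1 V, and a target of 20 mA, you would pick R = (5 - 3.1) / 0.02 = 95 ohms, so 100 ohms as the next standard value. The LED's exponential curve is still there, but the resistor dominates, which is exactly the "linearizing" effect.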
All this theory would hardly matter if the PWM pulses were nicely shaped rectangular pulses. Then the voltage would pulse the current, and with it the brightness. But, as johnwasser posted, capacitors can spoil the game. Even if you don't add any yourself, there are always parasitic ones. And then you have to cope with the nonlinear behavior of the current.
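If you are driving the LED from an Arduino anyway, hardware PWM via analogWrite() is the usual way to dim it. Here is a minimal sketch; the pin number and fade speed are just placeholder values for illustration:

```cpp
const int ledPin = 9;  // any PWM-capable pin; pin 9 is just an example

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // Ramp the duty cycle up, then back down.
  // Perceived brightness follows the average current.
  for (int duty = 0; duty <= 255; duty++) {
    analogWrite(ledPin, duty);  // 0 = off, 255 = fully on
    delay(5);
  }
  for (int duty = 255; duty >= 0; duty--) {
    analogWrite(ledPin, duty);
    delay(5);
  }
}
```

Note that the LED still needs its series resistor here: PWM only switches between fully on and fully off, so the on-current stays defined by the resistor, and the exponential curve drops out of the brightness control.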
And, as dc42 posted, LEDs should not simply be connected in parallel, at least not if you want them all to be equally bright. Connecting them in parallel forces all LEDs to the same voltage. Because of the steep slope of the current curve, together with slight differences in the characteristics of each LED, the resulting currents - and thus the brightnesses - may differ. If you connect the LEDs in series instead, all of them carry the same current, but then the supply voltage has to be sufficiently high.
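To put numbers on the series option (again only an example; use your LED's datasheet values): three white LEDs with Vf around 3.1 V each at 20 mA drop about 9.3 V together, so a 12 V supply leaves 2.7 V for the resistor, giving R = 2.7 / 0.02 = 135 ohms, or 150 ohms as the next standard value. All three LEDs then see exactly the same 20 mA and thus very nearly the same brightness.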