Theoretically, an array of nanowires has some advantages over flat or patterned silicon when it comes to light trapping and absorption. The disadvantages come from the difficulty of getting the electron-hole pairs out of the wires: the wires must be sufficiently conductive and must make good electrical contact with the outside world. Another drawback is that patterning requires some sort of controlled deposition onto the thin-film substrate, when ideally you would want a self-assembled system that doesn't need a carefully patterned mask and has huge throughput for manufacturing.
Figure 1 (from Kelzenberg et al., 2010): (a) SEM image of the regular nanowire array embedded in PDMS and (b) schematic of the system.
The arrays in question are rods 8-12 μm long that cover about 5 % of the surface area. Compare that to a wafer-type silicon photovoltaic cell, which might contain 250 μm of high-quality silicon: the silicon nanowire cell uses only about 0.24 % as much material. Since photovoltaic-grade silicon is quite expensive, this is potentially a cost advantage. From a practical perspective, the concept is also robust, because the silicon nanowires are embedded in a polymer that protects them from damage.
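As a sanity check on that 0.24 % figure, here's a quick back-of-the-envelope calculation (a minimal sketch in Python, assuming the 5 % fill fraction applies over the full length of the longer 12 μm wires):

    # Rough estimate of silicon usage: nanowire array vs. a wafer-based cell.
    wire_length_um = 12.0       # upper end of the quoted 8-12 um wire length
    fill_fraction = 0.05        # ~5% of the surface area is covered by wires
    wafer_thickness_um = 250.0  # typical high-quality silicon wafer thickness

    # Equivalent solid-silicon thickness of the wire array.
    equivalent_um = wire_length_um * fill_fraction        # ~0.6 um
    relative_usage = equivalent_um / wafer_thickness_um   # ~0.0024

    print(f"Equivalent silicon thickness: {equivalent_um:.2f} um")
    print(f"Fraction of wafer-cell material: {relative_usage:.2%}")  # ~0.24%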
The authors also added aluminium oxide (Al2O3) nanoparticles to the sides of their nanowires in an effort to increase scattering within the cell and hence light trapping. This had a very significant effect: they achieved a maximum of 84.6 % light absorption, compared to 87.2 % for a commercial cell. Remember, this is with a tiny fraction of the amount of silicon used in a commercial cell.
Figure 4 (from Kelzenberg et al., 2010): Compare the solid red and solid blue lines in (a). The nanowire arrangement is slightly inferior in the visible spectrum but markedly superior in the near-infrared. In (b), the area under the curves indicates total light absorption and hence electron generation.
Lastly, it was reported at the end of the supplementary information (what better place to hide such details?) that they saw some evidence of sub-bandgap absorption. That is, light with wavelengths longer than about 1120 nm was being absorbed, even though such photons carry less energy than silicon's bandgap and shouldn't be able to create electron-hole pairs. This kind of absorption tends to be parasitic: it doesn't contribute to moving electrons and in fact reduces the output current. So there is some concern that the increase in infrared absorption, the main claim to fame here, was due to parasitic effects rather than something that would actually enhance the current being produced. Not all of their cells showed sub-bandgap absorption, however, so it may be largely a quality-control problem.
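For a rough sense of where that cutoff comes from, the bandgap wavelength is just hc/Eg. Here's a minimal check (assuming silicon's room-temperature bandgap of about 1.12 eV; the exact cutoff shifts a little with temperature and doping):

    # Wavelength corresponding to silicon's bandgap: photons with longer
    # wavelengths have too little energy to create an electron-hole pair.
    hc_ev_nm = 1239.84   # Planck constant x speed of light, in eV*nm
    bandgap_ev = 1.12    # assumed room-temperature silicon bandgap

    cutoff_nm = hc_ev_nm / bandgap_ev
    print(f"Bandgap wavelength: {cutoff_nm:.0f} nm")  # ~1107 nm, close to the ~1120 nm quoted above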