At my previous job, we struggled mightily with reading and writing wide Iceberg tables with Spark. Watching this Future Data Systems talk by Russell Spitzer, I think I finally understand why. He mentions it almost as a footnote, in response to a question about the Iceberg REST catalog:
One of the most common problems that people have with really wide tables…the way Parquet is constructed, even though they have columnar representations of your data, you have to keep all of the column vectors for the same row group in the same file…your files are very wide and your columns end up being very short to end up in the same file
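To make the arithmetic concrete, here's my own back-of-the-envelope sketch, not something from the talk. parquet-mr, the writer Spark uses, flushes a row group once it hits a byte budget (`parquet.block.size`, 128 MB by default), and all of a row group's column chunks must land in the same file. Divide that fixed budget across more columns and each column vector gets fewer rows:

```python
# Back-of-the-envelope: rows per Parquet row group under a fixed byte budget.
# Assumptions (mine, not the talk's): parquet-mr's default 128 MB row-group
# target and ~8 uncompressed bytes per value (e.g. an int64/double column).
ROW_GROUP_BYTES = 128 * 1024 * 1024  # parquet.block.size default
BYTES_PER_VALUE = 8                  # rough size of one uncompressed value

for num_columns in (10, 100, 1_000, 10_000):
    rows = ROW_GROUP_BYTES // (num_columns * BYTES_PER_VALUE)
    print(f"{num_columns:>6} columns -> ~{rows:>9,} rows per row group")
```

Under those assumptions, 10 columns gives you about 1.7 million rows per row group, but 10,000 columns leaves only about 1,700: the "very short" column vectors Spitzer is describing. You can check this on real files with pyarrow, e.g. `pq.ParquetFile(path).metadata.row_group(0).num_rows`.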
