For an import this large, you are likely best off using SkylineRunner, which avoids consuming extra memory until the final step of joining all of the individual .skyd files for your 200 raw files. That join step is where memory consumption will become an issue, but each individual raw data file should import just fine, producing a separate .skyd file with little increase in memory use.
In addition, when using SkylineRunner, Skyline does not need to maintain the user interface or any undo information, which further reduces memory consumption.
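As a rough sketch of what such a command-line import might look like (the document name, directory path, and log file below are placeholders, and you should check the exact argument names against the command-line documentation for your Skyline version):

    SkylineRunner.exe --in=MyDocument.sky --import-all=D:\RawFiles --save > import.log

This imports every raw file found under the given directory into the document, saves the result, and writes the console output to a log file.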
Yes, as you approach your memory limit, everything slows down because the operating system starts swapping memory to disk. The only solutions are to find a way to use less memory or to find a computer with more memory.
You can also add the --memstamp argument so that SkylineRunner includes memory use in its log output, which gives us more visibility into your memory consumption over time if you send us the log.
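For example, adding --memstamp (and --timestamp, which prefixes each log line with the time) to the same placeholder command:

    SkylineRunner.exe --in=MyDocument.sky --import-all=D:\RawFiles --save --timestamp --memstamp > import.log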
What kind of experiment is this? How many transitions does your document contain? Based on the experiments I have done myself, I would guess DIA with over 100,000 transitions, but if you have only a few thousand transitions, then something else may be wrong, and we would be very interested in understanding why memory consumption is so high. What kind of raw data files are they? (Thermo, SCIEX, Waters, etc.)
You can find helpful starting scripts for processing large data sets with SkylineRunner in the resources for the Skyline Tutorial Webinars on processing large-scale DIA.
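As a rough sketch of the general shape of such a script (this is not the webinar script itself; the file extension, paths, and document name are placeholders), you could import each raw file in its own SkylineRunner call from a Windows batch file, which keeps any single run's memory footprint small:

    @echo off
    REM Import each raw file with a separate SkylineRunner call (placeholder paths).
    for %%f in (D:\RawFiles\*.raw) do (
        SkylineRunner.exe --in=MyDocument.sky --import-file="%%f" --save >> import.log
    )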
Thanks for reporting your issue to the Skyline support board.