My experience is mostly with Linux systems. It is easy to be overwhelmed by the volume of information written to log files, so knowing what to look for is highly valuable. If there's a specific problem you're trying to diagnose, reproducing it while running a tail command on the log is an easy way to capture the relevant entries. It's also often easier to find the important error messages by starting at the end of the relevant logs and working backward.
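For instance, a minimal shell session sketching both ideas (demo.log is a throwaway file standing in for your real log, whose path will differ, e.g. /var/log/syslog on many distributions):

```shell
# Create a throwaway log to stand in for your real log file.
printf 'INFO starting up\nWARN disk 90%% full\nERROR write failed: ENOSPC\n' > demo.log

# Start at the end: the most recent entries are usually the relevant ones.
tail -n 2 demo.log

# While reproducing a problem, follow the log live instead (Ctrl-C to stop):
#   tail -f /var/log/syslog
```

The commented tail -f line is the "reproduce while tailing" technique; it is left commented here because it runs until interrupted.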
You can also filter out low-priority log messages so that only warnings and errors are shown; on Linux/UNIX, grep can do this, or you can raise the log level of the application itself. You may also find it helpful to display only the fields of interest in each log entry.
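As a sketch of both filtering steps (the log format here is invented; adjust the pattern and the field numbers to match your own logs):

```shell
# A tiny sample log in an assumed "date level message" format.
printf '2024-05-01 INFO cache warmed\n2024-05-01 WARN response slow\n2024-05-01 ERROR connection refused\n' > demo.log

# Keep only the warnings and errors:
grep -E 'WARN|ERROR' demo.log

# Show only the fields of interest (here, everything after the date):
cut -d ' ' -f 2- demo.log
```

Combining the two with a pipe (grep ... | cut ...) gives you a compact warnings-and-errors-only view.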
For performance monitoring, the standard Linux tools (top, ps, vmstat, and the like) give you numerical data, and you can sort the results to make it easy to discover, for example, that a certain process is consuming a lot of virtual memory.
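For example, using the procps tools found on most Linux distributions (the column set is just one reasonable choice):

```shell
# The ten largest processes by virtual memory size (VSZ, in KiB),
# largest first; the extra line is the column header.
ps -eo pid,vsz,rss,comm --sort=-vsz | head -n 11

# A similar view via top in batch mode, sorted by resident memory
# (procps-ng option syntax; left commented as an alternative):
#   top -b -n 1 -o %MEM | head -n 20
```

The --sort=-vsz flag does the sorting for you, so you don't need to pipe through sort(1) at all.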
On a Windows machine, I expect similar techniques to apply, but you'll probably need PowerShell (for example, Get-Content -Wait to follow a file, or Get-WinEvent to query the event log) to gain the greatest benefit.
-----Original Message-----
From: Zee