

For many projects, distributing binary assets is easy: put the files on GitHub and you're done. But at Jellyfin, we needed something more robust, something able to handle our needs more elegantly than GitHub or a basic web server could. And both for those interested, and for those supporting other similar projects, I'd like to share how we do it.

Prelude - Pre-10.6.0

Before our 10.6.0 release, we had a fairly simple repository setup: everything would build on a VPS running Debian, called build1, in response to a GitHub web-hook. This server, located on Digital Ocean in the Toronto zone, housed both the build process and our main repository, served directly via NGiNX.

But this release was the first where we noticed a problem. Users, especially users in Europe and Asia, were having trouble downloading our releases. The main complaint was abysmally slow download speeds, and occasionally even full-on timeouts. Jellyfin is a global project, and while I'm personally located in Ontario, Canada, the vast majority of our users are not.

Mirrorsync archive

I'm running CSF firewall and wish to archive log lines indicating a block, for the purpose of periodic statistical analysis of prevalent network offenders based on ASN. Is there native functionality to copy|duplicate|sync lines from a specific log file matching specific search criteria to a separate data file for archival reference?

The firewall allows blocking by country or ASN, which has been a very effective method for reducing spam and port scanners, eliminating 96+% of undesired traffic. Previously, I have copied the transformed data to Google Sheets for analysis via pivot table.

Matching lines are easily identified by 'Blocked in csf' and '[AS':

6 hits in the last 230 seconds - *Blocked in csf* for 36000 secs

A few details about the log:

- It can receive upwards of 120 of these entries per day, at any moment.
- It is rotated every Sunday morning some time after 0300; the starting and ending times are inconsistent, and entries are commonly written during this period.
- The rotated file is gzipped and renamed: /var/log/.
- The date is in an irregular format for statistical analysis, and about half of each log line is not needed.

I'm presuming a real-time solution is ideal, since this will avoid the need to keep track of which entries have been copied and which haven't. Otherwise, the solution will need to address the issue of the log being rotated and compressed periodically at an inconsistent time. Ideally, I'd also like to transform the data into a regular, delimited format as it is copied to the archive file, to examine the most prevalent (frequency) offenders by date period.
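
To make this concrete, here is a rough Python sketch of the kind of real-time approach I have in mind. Everything specific in it is a placeholder assumption, not documented CSF behaviour: the log and archive paths (the real rotated-file name is omitted above), the syslog-style timestamp, and the idea that the ASN appears as "[AS12345". It follows the log, detects rotation by watching the inode, and appends matching lines to a CSV archive in reduced form:

```python
#!/usr/bin/env python3
"""Follow the block log and append matching entries to a CSV archive.

Rough sketch only: LOG_PATH, ARCHIVE_PATH, the timestamp format, and
the "[AS12345" layout are placeholder assumptions, not CSF specifics.
"""
import csv
import os
import re
import time
from datetime import datetime

LOG_PATH = "/var/log/lfd.log"             # placeholder; real path omitted above
ARCHIVE_PATH = "/var/log/csf-blocks.csv"  # placeholder archive location

BLOCKED = re.compile(r"Blocked in csf")   # first match criterion
ASN = re.compile(r"\[AS(\d+)")            # second criterion; assumes "[AS12345 ..."
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def transform(line):
    """Reduce a raw line to (iso_date, ip, asn), or None if it doesn't match."""
    if not BLOCKED.search(line):
        return None
    asn = ASN.search(line)
    if not asn:
        return None
    # Assumes a syslog-style "Mon DD HH:MM:SS" prefix; adjust to the real format.
    try:
        stamp = datetime.strptime(line[:15], "%b %d %H:%M:%S")
        date = stamp.replace(year=datetime.now().year).isoformat()
    except ValueError:
        date = ""
    ip = IPV4.search(line)
    return (date, ip.group(0) if ip else "", asn.group(1))

def follow(path, start_at_end=True):
    """Yield lines as they are written, reopening the file after rotation."""
    while True:
        with open(path) as fh:
            inode = os.fstat(fh.fileno()).st_ino
            if start_at_end:
                fh.seek(0, os.SEEK_END)
            start_at_end = False  # after a rotation, read the new file from the top
            while True:
                line = fh.readline()
                if line:
                    yield line
                    continue
                time.sleep(1)
                try:
                    if os.stat(path).st_ino != inode:
                        break  # log was rotated; reopen the new file
                except FileNotFoundError:
                    pass  # rotation in progress; keep polling

if __name__ == "__main__":
    with open(ARCHIVE_PATH, "a", newline="") as archive:
        writer = csv.writer(archive)
        for raw in follow(LOG_PATH):
            row = transform(raw)
            if row:
                writer.writerow(row)
                archive.flush()  # don't lose matches if the process dies
```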

I'd like to analyze these offenders on a daily to weekly basis, so I'm looking to streamline this as much as possible, including simple statistical analysis over selected date periods via the command line. I have no experience with stats on Linux, so I'm also looking for a recommendation on a native stats package that's easy to invoke on the archive file. So far, PSPP looks like the most effective tool for this job.
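
For the simplest command-line summaries, even a standard-library script may be enough before reaching for PSPP. A minimal sketch, assuming the (iso_date, ip, asn) CSV layout produced by the archiving sketch above:

```python
#!/usr/bin/env python3
"""Rank blocked ASNs over a date range, straight from the CSV archive.

Sketch only: assumes the (iso_date, ip, asn) columns written by the
archiving sketch above; ARCHIVE_PATH is the same placeholder.
"""
import csv
import sys
from collections import Counter

ARCHIVE_PATH = "/var/log/csf-blocks.csv"  # placeholder, as above

def top_offenders(since, until, limit=20):
    """Count blocks per ASN for rows whose date falls in [since, until]."""
    counts = Counter()
    with open(ARCHIVE_PATH, newline="") as fh:
        for row in csv.reader(fh):
            if len(row) != 3:
                continue  # skip malformed rows
            date, _ip, asn = row
            # ISO dates compare correctly as plain strings, e.g. "2024-01-05".
            if asn and since <= date[:10] <= until:
                counts[asn] += 1
    return counts.most_common(limit)

if __name__ == "__main__":
    # Usage: python3 top_offenders.py 2024-01-01 2024-01-07
    since, until = sys.argv[1], sys.argv[2]
    for asn, hits in top_offenders(since, until):
        print(f"AS{asn}\t{hits}")
```
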
Thanks in advance for your advice on creating an archive of selective log lines, possibly transforming them in the process, and maybe a native stats package.
