
[Kuroi-Subs] Tower of God - 05 [HEVC 1080p] [JAP+KOR] [8DDC39E9]

Category:
Date: 2020-05-05 22:54 UTC
Submitter:
Seeders: 0
Information: No information.
Leechers: 1
File size: 748.8 MiB
Completed: 238
Info hash: 23848bb381ae722081b28e4f18e9a00e8fe0c608
Video: [CR] → [10-bit, HEVC, CRF 16] (very slow compression)
Audio: [OPUS 2.0, 128 kbps]
* Track 1: [CR] JPN
* Track 2: [Squirrel] KOR → [Synced]
Subtitles: [CR] → [SSA, Timed, TS'ed & Improved TL]
* Track 1: Timed & TL'd for JPN.
* Track 2: Timed & TL'd for KOR.

---

#### **Release Note:**

I will be covering episodes 01-04 and 06-13: probably 2-3 days for the already-released episodes and 1-2 days for upcoming ones.

#### **Subtitles Note:**

1. I've done what any fansub group would do, but the OP/ED are not covered so far. I'd appreciate it if anyone could step up and help me out with this; otherwise, I'll try to come up with something during the weekend(s). I do want to kfx both the JPN & KOR versions, but I'll see if I can find the time for any of that.
2. As you can see above, there are two subtitle tracks, one for each audio track. TL & timing differ between the tracks. Similarly, the naming conventions (honorifics) follow the respective language. For example, the MC's name in the JPN track is **Yoru** while in the KOR track it's **Bam**.

#### **Group Note:**

Though the release is labelled under a group name, I'm the only one currently working on it. I'm using the name because I used it for releasing **Gintama' Enchousen** and **Gintama: Yorozuya Eien Ni Nare** (movie) back in the day. I don't plan to release / cover any other show except **God of High School**, releasing in July 2020, and ONLY if no other group picks it up. Having said that, I'd appreciate it if someone joined, or contacted, me about covering the OP/ED kfx for this show. You may contact me here or over this [discord](https://discord.gg/2VwXtUr), which is my natural habitat.

---

**P.S:** The KOR dub is pretty cool. Khun's Korean VA is a treat to the ears.
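For readers who want to see how settings like these map onto an actual command, here is a rough, illustrative sketch of an equivalent ffmpeg invocation wrapped in Python; the file names, track mapping, and exact flags are assumptions, not the uploader's actual pipeline:

```python
# Illustrative encode roughly matching the listed settings (10-bit HEVC at
# CRF 16 with the "veryslow" preset, stereo Opus at 128 kbps). File names and
# track mapping are made up; this is not the uploader's actual script.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "tog_05_source.mkv",
    "-map", "0:v:0",
    "-c:v", "libx265", "-preset", "veryslow", "-crf", "16",
    "-pix_fmt", "yuv420p10le",                       # 10-bit output
    "-map", "0:a:0",
    "-c:a", "libopus", "-b:a", "128k", "-ac", "2",   # OPUS 2.0 @ 128 kbps
    "tog_05_encode.mkv",
], check=True)
```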

File list

  • [Kuroi-Subs]_Tower_of_God_-_05_[HEVC_1080p]_[JAP+KOR]_[8DDC39E9].mkv (748.8 MiB)
why would you re-encode lossy audio
Can't wait to check this out. Even the KOR audio should be fun. Edit: Woah...KOR OP song rocks much harder. I didn't really expect that. Edit 2: KOR tracks were neat. I prefer the KOR VAs for Rachel and Shibisu in particular. Reading the KOR honorifics made me nostalgic for the early scanlations by The Company. Excellent work. I'm really looking forward to more.

zeust (uploader)

@herkz For the KOR audio, it was re-encoded from ~250 kbps (AAC) to 128 kbps (OPUS), which saves space, though insignificantly. OPUS at 128k gives the same quality as AAC at 256k, and since both are lossy, why not shed some weight off the audio? Again, it may seem insignificant to some. For the JPN audio there was no gain, as it was already 128 kbps (AAC), so OPUS at 128 kbps still has the same size & quality. For the sake of consistency, I did 'em both in OPUS.
i regret asking
Out of curiosity, how much space can audio take up in a normal episode?
You don't just shed some weight. What you get by encoding lossy audio again is another layer of lossiness. Your text reads as if you have misunderstood what you are talking about. If you had a lossless source and encoded it in two different ways (256k AAC and 128k OPUS), it would be a completely different story. @Aryma I have deleted some English dub tracks from my anime; stereo FLAC was ~200 MB and AAC/OPUS/AC3 ~75 MB.
Ok, so you did:

256 kbps AAC > 128 kbps Opus
Quality reduced: 30%
Overall space saved: 3% (or 24 MB)

128 kbps AAC > 128 kbps Opus
Quality reduced: 5%
Overall space saved: 0%

Not much space was saved, but the audio quality dropped more. It is also bad practice to re-encode already-lossy audio, as others have said. If you wanted to save space, you could change your video compression instead:

CR > HEVC CRF 18 (instead of 16)
Quality reduced: 5-10% (over CRF 16)
Overall space saved: 16% (assuming a ~120 MB reduction in video; CRF 18 at 1080p still looks pretty good for anime)
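As a rough sanity check on those size figures, audio stream size is just bitrate times duration; a quick sketch assuming a ~24-minute episode (the runtime is an assumption, the bitrates are the ones discussed above):

```python
# Back-of-the-envelope audio stream sizes for a ~24-minute episode.
# The runtime is an assumption for illustration, not a measurement.
def stream_size_mib(bitrate_kbps: float, minutes: float) -> float:
    # kbps -> bits, then bytes, then MiB
    return bitrate_kbps * 1000 * minutes * 60 / 8 / (1024 * 1024)

for label, kbps in [("AAC 256 kbps", 256), ("AAC/Opus 128 kbps", 128)]:
    print(f"{label}: ~{stream_size_mib(kbps, 24):.1f} MiB")

# Prints roughly 43.9 MiB vs 22.0 MiB, so dropping a 256k track to 128k saves
# about 22 MiB (~23 MB) -- only a few percent of a ~750 MiB episode.
```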

zeust (uploader)

@Fiddlel @junh1024 Can you back up your statements with actual data? Not trying to start an argument, but it would be nice if, for example, @Fiddlel could explain how encoding AAC -> OPUS introduces layered lossiness. Last I knew, OPUS can maintain the same quality at 128 kbps that an AAC codec retains at 256k. If you can point me to sections of the audio where OPUS sounds buggy compared to the original (lossy) source, I'd reconsider my choice of audio encoding.

Building on that logic, does AAC (lossy) -> OPUS (lossy) mean the OPUS will have lower quality? It could, if the bitrate is insufficient, but not if the bitrate is good enough for OPUS to maintain the quality. I understand that assigning more bitrate during a re-encode won't make the audio sound better out of the blue, but there are codecs like OPUS that are known for maintaining quality amazingly well despite being lossy. I'm only saying this after running my own tests on many videos over the years.

Similarly, @junh1024, you've mentioned 256 kbps AAC > 128 kbps Opus (30% reduction) and 128 kbps AAC > 128 kbps Opus (5% reduction). Can you link me to the source of this data? What is meant by a reduction in quality, how is it calculated, and is it reproducible?

I'm good at what I do, but you learn new things every day. Same for everyone. I'd appreciate it if people could provide an appropriate source with reasoning that supports what they are stating. I'd then reconsider this for the coming releases (of this show).
lossy encoding throws away (relatively) unimportant data when made from a lossless source. when you make a lossy encode from a lossy source, it throws away important data since the unimportant data is already gone.
Is the TL for the KOR audio official or fansubs?
@zeust It's on you to back up your statements here. Where's the source for how you do NOT lose quality by re-encoding into a lossy format? That's just how it works; it's written in every manual of every program that does this kind of stuff, and it's the first thing you learn in any compression theory course. The only way to maintain quality is to either copy the stream or re-encode into a lossless format (which will increase the size). The only way to increase quality is to "make it up" through manual intervention or AI (think about how decensoring mosaics works: the result depends on the artist who redraws the whole thing according to his own imagination). We're in the real world here, not in a pseudoscientific police-investigation movie. Of course the quality loss can be minimal, but you're not getting anywhere even if the second encoder is a million times better than the first one.
I am too lazy to look for "data" which backs up my claim, whatever that data might be. You don't even need data for this, if you mean measured data as in statistics. You can just open up a detailed explanation of how lossless and lossy encoding work and read it for yourself.

"When audio files are to be processed, either by further compression or for editing, it is desirable to work from an unchanged original (uncompressed or losslessly compressed). Processing of a lossily compressed file for some purpose usually produces a final result inferior to the creation of the same compressed file from an uncompressed original. In addition to sound editing or mixing, lossless audio compression is often used for archival storage, or as master copies." https://www.wikiwand.com/en/Data_compression

"Generation loss is the loss of quality between subsequent copies or transcodes of data. Anything that reduces the quality of the representation when copying, and would cause further reduction in quality on making a copy of the copy, can be considered a form of generation loss. File size increases are a common result of generation loss, as the introduction of artifacts may actually increase the entropy of the data through each generation." https://www.wikiwand.com/en/Generation_loss

This is just Wikipedia and not scientific literature, but my guess is that it's enough to prove my point.
@zeust - Please don't get discouraged. Learn what you can from the technical criticism, or ignore it if you want...because you've made a fun release and no one else is doing anything comparable, let alone better. More, please.
I agree with man00ver. Doing this is good. I just chimed in because you were misinformed on the technical part.

zeust (uploader)

@suou The KOR dub is official. It's pretty neat work.

zeust (uploader)

I understand what lossy encoding is, but I don't understand why lossy -> lossy encoding is criticized without backing it up with any testing.

From what I can gather from all of the different statements: if I do x264 (lossy) -> HEVC (lossy), I can never attain the quality that the x264 had, no matter what control parameters I apply to HEVC, right? Why are you all discussing how lossy encoding is going to decrease quality while not taking into account the quality parameters given to the encoder?

Let's consider two videos of the same quality, one in x264 and the other in HEVC. Theoretically, they will have different sizes because of their algorithmic differences, and HEVC will have eradicated more data from the video because HEVC can retain quality at a smaller size than x264, right? That's the whole point of HEVC (or any newer codec). Meaning, even though the HEVC file carries less data than the x264 file, when played (decoded) it will display the same quality as the x264 file, right?

Taking the same example to audio, how can you say that AAC (lossy) -> OPUS (lossy) is always going to end in a quality decrease? If that were true (for lossy encoding), then HEVC, a lossy encoder, could never retain the same quality as x264 if we do x264 -> HEVC, no?

Secondly, let's say that lossy -> lossy will always result in a quality decrease, and that each encode below uses a 128k bitrate:

AAC -> OPUS | 5% quality reduction (maybe)
OPUS -> AAC | 5% quality reduction (maybe)

What if we do something like this: AAC -> OPUS -> AAC -> OPUS -> AAC ... and so on. Since each step decreases the quality by 5%, if we do the math, at the end of the 10th round the final file should have a total quality reduction of 40.13%, and by the 100th round a quality reduction of 99.41%. Which means the file at the 100th round will retain 100 - 99.41 = 0.59% of the original's quality and should be either inaudible or complete garbage.

zeust (uploader)

I hope we agree on the fact that in reality this is not the case. I know the word **lossy** sounds as if it's going to lose quality no matter what, but if we control the quality parameters, we can retain the quality. Lossy encoding is NOT an ever-decreasing quality process; if it were, the above example would hold true. If you're suspicious, we can even decrease the reduction to 0.1% per step and we'd still get a total quality reduction of 63.23% by the end of the 1000th round.

If lossy -> lossy is going to decrease quality REGARDLESS of what quality parameters we use, there will come a point when the end file is complete garbage compared to the original. That could only happen if we kept using degrading quality parameters (like a lower bitrate or compression ratio at each step).

Another counter to the above example is that after a couple of steps, the encoder knows there is nothing left to trim from the source because it's already efficient. Sounds plausible, and that's why I switched the encoding between OPUS -> AAC -> OPUS at each step, so there would have to be a quality decrease if lossy encoding were indeed an ever-decreasing quality process. But even if we agree that an encoder can decide when not to trim anything from the source, it means the encoders, be they video or audio, are aware of the absolute minimum threshold to keep so that the quality (asked for by the quality parameters) can be achieved. Which means that even in the first step, AAC (lossy) -> OPUS (lossy), OPUS can decide to retain the quality if the quality parameters are given correctly. The same goes for x264 (lossy) -> HEVC (lossy): HEVC can attain the same quality as x264, even at a smaller size, given correct quality parameters.

**I'm not defending my release at this point. I'm just arguing against the misconception that lossy -> lossy will always decrease quality. If you don't agree, I'd rather see some data than mere statements that can't be backed up with data / tests.**

P.S: Sorry for such long comments.
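For what it's worth, the percentages quoted in the last two comments are just a constant per-step loss compounded over generations; a minimal check of that arithmetic (the 5% and 0.1% per-step figures are the hypothetical numbers from the comments, not measured codec behaviour):

```python
# Compounding the hypothetical constant per-step quality loss used in the
# comments above. Purely arithmetic; real codecs do not lose a fixed
# percentage per generation.
def total_reduction(per_step_loss: float, rounds: int) -> float:
    return 1 - (1 - per_step_loss) ** rounds

for loss, rounds in [(0.05, 10), (0.05, 100), (0.001, 1000)]:
    print(f"{loss:.1%} per step x {rounds:>4} rounds -> "
          f"{total_reduction(loss, rounds):.2%} total reduction")

# 5.0% per step x   10 rounds -> 40.13% total reduction
# 5.0% per step x  100 rounds -> 99.41% total reduction
# 0.1% per step x 1000 rounds -> 63.23% total reduction
```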
### [Generation loss](https://en.wikipedia.org/wiki/Generation_loss)

From Wikipedia, the free encyclopedia

https://www.youtube.com/watch?v=JR4KHfqw-oE
Best to just leave the audio alone. The audio is already low quality as it is, being web (or whatever it was broadcast in), and destroying more of what's left makes no sense. Also, your statement that 128 kbps Opus is on par with 256 kbps AAC... I am not even going to entertain how ridiculous that is. If you can't understand how lossy-to-lossy is bad, it's pointless to write anything further.

Lately it seems the folks who've been using Opus and advocating for its usage have no idea what they're talking about. One guy thinks encoding Opus audio at 96 kbps for all his anime encodes, regardless of series, is legit. Another guy believes Opus is better than AAC because of listening-test results based on 96 kbps or less from... 2014, but in his eagerness has no idea that at higher bitrates it's transparent anyway. Opus is only good at lower bitrates, nothing else (plus being cut off at 20 kHz doesn't help it either), and you won't be getting CD-quality audio at 128 kbps.

zeust (uploader)

@The0x539 I hope people actually read an article before citing the whole thing to back their statements. The article says the following:

"Repeated applications of lossy compression and decompression **can** cause generation loss, **particularly if the parameters used are not consistent across generations.** Ideally an algorithm will be both idempotent, meaning that if the signal is decoded and then re-encoded with identical settings, there is no loss, and scalable, meaning that if it is re-encoded with lower quality settings, the result will be the same as if it had been encoded from the original signal – see Scalable Video Coding. More generally, transcoding between different parameters of a particular encoding will ideally yield the greatest common shared quality – for instance, converting from an image with 4 bits of red and 8 bits of green to one with 8 bits of red and 4 bits of green would ideally yield simply an image with 4 bits of red color depth and 4 bits of green color depth without further degradation. **Some lossy compression algorithms** are much worse than others in this regard, being neither idempotent nor scalable, and introducing further degradation if parameters are changed."

All it means is that it may result in reduced quality if the parameters are not used correctly. As for the YouTube link, that's their only logic. If I get some time, I'll run the 1000-round audio encoding scenario mentioned above and share the results, but I doubt you, or anyone, will change their mind.

In case of confusion, let me state it again: I'm not claiming / advocating that lossy -> lossy encoding is completely OK, but that, if done carefully, you can retain the quality. That's the very reason x264 (lossy) -> HEVC (lossy) can still retain the same quality, even at a lower filesize. Of course, the data in the resulting HEVC file will be less than in the x264 file, but that doesn't **necessarily** mean the quality has decreased; it depends on the quality parameters used in the HEVC encoding. Am I missing something here?

zeust (uploader)

@noZA_ Can you explain how the audio was destroyed or reduced in quality? Can you point me anywhere in the entire audio where we can all hear the difference? You're not going to entertain the quality retention of OPUS even when backed by [thorough analysis](http://opus-codec.org/comparison/). Let me guess: you're not going to back up your own claims with any thorough analysis or testing either, right? That's cool.

I do understand the point being made, however. If the source had been FLAC and I had encoded it down to 128-256k OPUS / AAC, I'd understand the effects of lossy compression. But I don't buy the claim that the process of AAC (128/256k) -> OPUS (128k), **done with good quality parameters,** has decreased the quality. If so, let's see the tests done by the community.
> If you can't understand how lossy-to-lossy is bad, it's pointless to write anything further.

As I already wrote, these Opus supporters demonstrate just how nonsensical their words get; the more they write, the more they expose their irrational logic. How can one understand going from lossless FLAC to lossy compression, but lose it when going from 256 kbps AAC to 128 kbps Opus and believe no quality is being lost? This is something else. If this is your way of advocating for Opus to be more widely used by anime encoders, it's definitely the wrong way to go about it, because you have no idea what you're talking about.

zeust (uploader)

I wasn't advocating OPUS; I was trying to make sense of lossy compression. Nor was my aim to make this a re-encode release. Having said that, I did the 1000-round audio encoding I mentioned above, and here are the [audio files](https://mega.nz/folder/kVskjQib#hI3Blt1iOnWgTo3YT8nH7w), in case anyone's curious.

Clearly, my premise was wrong if you listen to the audio: lossy -> lossy is indeed an ever-decreasing quality process. You can control it to some extent, but the quality does decrease nevertheless. Beware: the 500+ generation files sound like demon possession, if you bother to listen. I kept the same parameters throughout, and even though the size only fluctuated from 1.5 MiB (first file) to 1.3 MiB (1000th file), the quality plummeted rather rapidly.

Now that I have seen an actual demonstration, it does prove that my AAC -> OPUS encoding introduced a quality reduction, however negligible it may be. In short, there haven't been many people this wrong in the history of encoding discussions. At least I was keeping true to data and test runs instead of blindly complying with whatever was said; maybe I should've tested this before all this arguing. Anyway, this also means a v2 release for this episode. Will do it later, after EP.06.
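For anyone curious how such a generation-loss run might be scripted, here is a minimal sketch; it assumes ffmpeg with libopus and the built-in aac encoder is installed, and the file names and the fixed 128 kbps setting are illustrative, not necessarily what the uploader used:

```python
# Minimal generation-loss loop: alternate Opus and AAC re-encodes at a fixed
# 128 kbps, keeping every generation for size and listening comparisons.
# Assumes ffmpeg (with libopus) is on PATH; names/settings are illustrative.
import subprocess

src = "gen_0000.m4a"  # starting point: an already-lossy AAC track
for i in range(1, 1001):
    if i % 2 == 1:
        dst, codec = f"gen_{i:04d}.opus", ["-c:a", "libopus", "-b:a", "128k"]
    else:
        dst, codec = f"gen_{i:04d}.m4a", ["-c:a", "aac", "-b:a", "128k"]
    subprocess.run(["ffmpeg", "-y", "-i", src, *codec, dst], check=True)
    src = dst
```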
People already did the work and provided information on this, so you pretty much wasted your time. Anyways, it will probably be better to provide a patch that simply changes the audio (and if you find any tweaks you want to add to the script) rather than forcing folks to download the entire 700+ meg file again.
@zeust Don't take this the wrong way, but you still have something to learn in this regard. Lossy encoding is guaranteed to degrade the signal, whether or not the degradation is audible, which is why it's called lossy to begin with. Part of that is due to the fact that the algorithm doesn't (and cannot) know whether the file was previously encoded in any way, so any and all methods involved in saving space will be applied anew, just as aggressively as the settings command.

Before an algorithm encodes a signal, it discards what it will not bother with, which in most cases means high-frequency content. The rest of the data is also subject to irreversible information loss. The loss won't manifest as a lack of signal; instead, depending on the algorithm, it will manifest as a *distortion* of the signal, or just plain noise. The encoder makes complex decisions on how to preserve the data so that you don't notice the loss; the settings mainly dictate how obvious it will be. The masking involves psychoacoustic models which reduce fidelity in places where it's harder for an untrained ear to detect artifacts (such as immediately following sharp or loud sounds, or in very low-amplitude parts). If a single encoding pass puts an artifact somewhere it may not be very noticeable, then subsequent transcoding will either magnify the artifact or introduce one elsewhere that the psychoacoustic model considers more appropriate.

Every artifact, every distortion effectively increases the amount of data to encode compared to the original file. And most of the time they don't compress as well as the original signal due to their inherently chaotic nature (analogy: you can express a graph depicting a parabola as a simple mathematical function, but throw some irregularities into that graph, and suddenly it becomes very hard to represent succinctly in text form).
Now, for the practical aspect: since we don't perceive sound discretely (e.g. it's not possible to hear it frame by frame as you would with an image), it's not as easy to demonstrate degradation as it would be with, say, a series of JPEGs where each is a recompressed version of the previous one. Being able to hear the difference will, to a large extent, depend on your experience and hardware (which is why, ideally, you should always account for people with more experience and/or better hardware).

But if you want to look at objective data, it's not too hard: take a high-quality, losslessly encoded file (1), encode it once (2), save the result, then transcode the encoded file with the same settings (3). Open the three files in an audio editor like Audacity or Sound Forge. Ensure they have exactly the same length down to a sample. Superimpose files 1+2, 2+3, and 1+3 while inverting the phase as needed, and save the results: this will yield (mostly very quiet) files containing the pure sonic difference between them. (This is also how noise-cancelling headphones work, by the way: they have microphones on the back of the speaker body which record the outside noise, invert its phase, and feed it into the headphone output.)

Listen to and look at the differences. In the case of 1+2, you will notice some high-frequency content cut off by the low-pass filter, and some warbling. With 2+3 and 1+3 there will be more warbling, and you will make out more of the original sound. 1+3 should be more audible than 1+2, with higher absolute amplitude and more changes happening in the audible range. This is not a perfect method, since it mixes together both what was cut out and what was introduced into the file, but it should provide some illustrative value at the least. Alternatively, you can conduct an ABC/HR test, which is the best option for this kind of comparison.
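If you would rather script that difference (null) test than line the files up in an editor, a minimal sketch might look like the following; it assumes both files have already been decoded to WAV at the same sample rate and sample-exact length, and that numpy and scipy are available (the file names are placeholders):

```python
# Minimal "null test": subtract two sample-aligned decoded WAVs to expose the
# residual -- everything the transcode removed or introduced.
# Assumes both inputs were decoded to WAV beforehand and are sample-aligned.
import numpy as np
from scipy.io import wavfile

rate_a, a = wavfile.read("original_decoded.wav")
rate_b, b = wavfile.read("transcoded_decoded.wav")
assert rate_a == rate_b and a.shape == b.shape, "inputs must be sample-aligned"

diff = a.astype(np.float64) - b.astype(np.float64)
rms = np.sqrt(np.mean(diff ** 2))
print(f"residual RMS: {rms:.2f} (0 would mean bit-identical decoded audio)")

# Write the residual so it can be listened to / inspected in an editor.
wavfile.write("difference.wav", rate_a,
              np.clip(diff, -32768, 32767).astype(np.int16))
```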