ulterno

joined 2 years ago
[–] [email protected] 0 points 11 months ago

As a treble lover, I tend to have problems with low-bitrate and lossily compressed stuff.
But from what I have ~~seen~~ heard, as long as the quanta are fine enough, the regenerated audio tends to be close enough to the original. Of course, the sound card's components matter once you get to extreme clarity levels, but I guess my ears are not fine enough for that.

[–] [email protected] 1 points 11 months ago

I usually try to stay away from any of those.
It's just that this time they decided to use the legal system to suppress Wikipedia, which is why I thought this needed to be shared.

Normally, I don't even care about checking Wikipedia for controversial topics.

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago) (17 children)

The content is... AI-assisted (maybe that's a better way to put it).
And yes, now you don't need to bring in the VA every time you add a line, as long as the licence for the TTS data holds.

You still want proper VAs for lead roles, though, or you might end up with empty-feeling dialogue. Even though AI tends to add inflections and all, from what I have seen it's not good enough to reproduce proper acting.
Of course, that would mean that those who cannot do the higher-quality acting ^[e.g. most anime English dubs. I have seen a few exceptions, but they are few enough to call exceptions] will be stuck with only making the TTS files, instead of getting lead roles.

But it also means that games which previously could not afford voice acting now can add it. Especially useful for one-dev projects.

Even better if there can be an open standard format for AI-training-compatible TTS data. That way, a VA can pay a one-time fee to a technician to create that file, then own said file and licence it however they like.

[–] [email protected] 1 points 11 months ago

I've seen people who are good at typing on a touch screen, and they do so astonishingly well. I myself am not able to type on touch well enough and just use swype instead (despite the frustration).

[–] [email protected] 6 points 11 months ago* (last edited 11 months ago) (1 children)

Swype typing can get pretty fast, tbh. But that greatly depends upon the software.
Despite the hate it got, Windows Phone's default keyboard had a far superior swype experience compared to Android and iOS. Probably because they didn't try to add every user word to their dictionary, and instead used the sentence structure as a reference to rank the predicted words.

Had this one been OSS, it would have been a great service. But now it has been scrapped along with the rest of Windows Phone. One of the reasons I hate to think of what would happen to any high-effort thing I make at a company.

[–] [email protected] 22 points 11 months ago (2 children)

with a smartphone in your hand

They are probably better at touch^[as in touchscreen :P] typing.

[–] [email protected] 1 points 11 months ago

Well, guess what? That was an e-book, and you only get to read it once.

[–] [email protected] 11 points 11 months ago (19 children)

I just re-read my comment and realised I was not clear enough.
You bundle the text and the AI-TTS. Not the AI text generator.

[–] [email protected] 1 points 11 months ago* (last edited 11 months ago)

So it's summer season. Was it supposed to be a bit to the north, or a bit to the south? Or a lot to the north? Until when is it considered summer anyway? October? November? It's still pretty hot out there.

The last time that happened, I was trying to hurry back home, before the rain started pouring. And guess what? I couldn't see the sun.

[–] [email protected] 1 points 11 months ago

You'll need an extra feat to add the time-delayed parameter selection metamagic.
Or you could just imbue a nino-magatama with it and give it to a fellow wizard to use on you.

[–] [email protected] 3 points 11 months ago

One can dream.

[–] [email protected] 22 points 11 months ago (27 children)

A really good place would be background banter, greatly reducing the amount of extra dialogue the devs have to come up with.

  1. Give the AI a proper scenario, with some Game lore based context, applicable to each background character.
  2. Make them talk to each other for around 5-10 rounds of conversation.
  3. Read them, just to make sure nothing seems out of place.
  4. Bundle them with TTS for each character sound type.

Sure, you'll have to make a TTS package for each voice, but at the same time, that can be licensed directly by the VA to the game studio on a per-title basis, and they too can then get more $$$ for less work.
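The four steps above could be sketched roughly like this. Everything here is a hypothetical placeholder: `generate_line` stands in for whatever actual text generator you'd wire in, and the characters and context are made up for illustration.

```python
import random

# Hypothetical stand-in for a real LLM call -- in an actual pipeline this
# would query a text generator with the game-lore context and the
# conversation so far.
def generate_line(character: str, context: str, history: list[str]) -> str:
    templates = [
        "Did you hear about {topic}?",
        "I heard {topic} again yesterday.",
        "Honestly, {topic} worries me.",
    ]
    return f"{character}: " + random.choice(templates).format(topic=context)

def banter(characters: list[str], context: str, rounds: int = 5) -> list[str]:
    """Steps 1-2: give each background character the lore-based context
    and alternate speakers for a few rounds of conversation."""
    history: list[str] = []
    for _ in range(rounds):
        for character in characters:
            history.append(generate_line(character, context, history))
    return history

def bundle_for_tts(history: list[str]) -> dict[str, list[str]]:
    """Step 4: group lines per character, so each character's TTS voice
    package receives exactly the lines it has to render."""
    bundles: dict[str, list[str]] = {}
    for line in history:
        speaker, _, text = line.partition(": ")
        bundles.setdefault(speaker, []).append(text)
    return bundles

# Step 3 (reading the output to catch anything out of place) stays a
# human job; here we just print a summary.
lines = banter(["Guard A", "Guard B"], "the dragon sightings up north", rounds=5)
bundles = bundle_for_tts(lines)
print(len(lines), sorted(bundles))  # -> 10 ['Guard A', 'Guard B']
```

The per-character bundling at the end is what makes the per-voice licensing work: each VA's TTS package only ever sees the lines assigned to that character.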
