fatboy93

joined 2 years ago
[–] [email protected] 1 points 2 years ago* (last edited 2 years ago) (9 children)

If you split-tunnel the Mullvad VPN connection to just your torrent application, instead of running the VPN for the laptop's entire network stack, this could probably be done.

Alternatively, dockerize the entire VPN + torrent (+ Jellyfin) setup? That way the container gets the VPN, but you can still access Jellyfin via your host IP.
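As a rough sketch of that dockerized setup (my own assumption of how you'd wire it, not from the original comment): gluetun is a commonly used VPN container with built-in Mullvad support, and other containers can share its network namespace so only their traffic goes through the tunnel.

```yaml
# Hedged sketch, assuming gluetun + qBittorrent + Jellyfin; image names and
# env vars are based on gluetun's documented Mullvad/WireGuard options.
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=<your key>
      - WIREGUARD_ADDRESSES=<your address>
  torrent:
    image: linuxserver/qbittorrent
    network_mode: "service:gluetun"   # torrent traffic rides the VPN tunnel
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"                   # Jellyfin stays reachable on the host IP
```

The key line is `network_mode: "service:gluetun"`: only the torrent container shares the VPN's network stack, while Jellyfin publishes its port on the host normally.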

[–] [email protected] 1 points 2 years ago

Absolutely wild! My close-to-a-decade-old ThinkPad is now our HTPC. Still plenty fast, too!

[–] [email protected] 13 points 2 years ago

While I really get what you're saying, the unfortunate reality is that those concerns always play second fiddle to money.

[–] [email protected] 10 points 2 years ago* (last edited 2 years ago) (1 children)
[–] [email protected] 5 points 2 years ago

Just don't get a Xiaomi if you're in the US. I had one when I moved from India last year, and the majority of the bands were unsupported, so I was stuck at 2G/3G speeds.

[–] [email protected] 3 points 2 years ago

I'd love to have a compact list option as well!

[–] [email protected] 4 points 2 years ago

Just saw this post a few minutes back and downloaded it.

Not sure if it's my phone (Samsung S22+) or something else, but comments keep dropping and I need to reopen the post for them to show up.

[–] [email protected] 3 points 2 years ago

I've started forwarding the calls to Google Voice, or if I accidentally pick up, I use Bixby's auto answer.

Either way, the call drops in a hot second.

[–] [email protected] 3 points 2 years ago

I fell in love with the webapp first and now I like the android native app!

[–] [email protected] 3 points 2 years ago

Of course you'd want to lump priests in with teachers.

[–] [email protected] 8 points 2 years ago

Boost gives me so much joy to use.

[–] [email protected] 0 points 2 years ago* (last edited 2 years ago)

I'm just going to cheat here a bit and use ChatGPT to summarize this, since I don't want to do the calculation wrong. Hope it makes sense. I'm just excited to share this!

########## Integrated GPU #########

Total inference time = Load time + Sample time + Prompt eval time + Eval time

Total inference time = 26205.90 ms + (6.34 ms/sample * 103 samples) + 29234.08 ms + 118847.32 ms

Total inference time = 26205.90 ms + 653.02 ms + 29234.08 ms + 118847.32 ms

Total inference time = 174940.32 ms

So, the total inference time is approximately 174940.32 ms.

########## Discrete GPU 6800M #########

Total inference time = Load time + Sample time + Prompt eval time + Eval time

Total inference time = 60188.90 ms + (3.58 ms/sample * 103 samples) + 7133.18 ms + 13003.63 ms

Total inference time = 60188.90 ms + 368.74 ms + 7133.18 ms + 13003.63 ms

Total inference time = 80694.45 ms

So, the total inference time is approximately 80694.45 ms.

#####################################

Taking the difference, Integrated − Discrete: 94245.87 ms.

That works out to about 54% less total time, or roughly 1.5 minutes faster: the integrated GPU takes close to 175 seconds, while the discrete GPU finishes in about 81 seconds.
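The totals above can be sanity-checked with a few lines of Python. The helper function below is my own (llama.cpp just prints these timings); the input numbers are the ones quoted:

```python
# Recompute the quoted llama.cpp-style timing totals (all values in ms).

def total_inference_ms(load, ms_per_sample, samples, prompt_eval, eval_ms):
    """Total = load + (per-sample cost * samples) + prompt eval + eval."""
    return load + ms_per_sample * samples + prompt_eval + eval_ms

integrated = total_inference_ms(26205.90, 6.34, 103, 29234.08, 118847.32)
discrete = total_inference_ms(60188.90, 3.58, 103, 7133.18, 13003.63)

print(f"Integrated: {integrated:.2f} ms")             # 174940.32 ms
print(f"Discrete:   {discrete:.2f} ms")               # 80694.45 ms
print(f"Difference: {integrated - discrete:.2f} ms")  # 94245.87 ms
print(f"Time saved: {(integrated - discrete) / integrated:.0%}")  # 54%
```

Note that "54% less time" is the same thing as the discrete GPU being roughly 2.2x faster end to end.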

I do think that adding more RAM at some point could definitely improve the load times, since the laptop currently has about 16 GB.
