• 11 Posts
  • 38 Comments
Joined 1 year ago
Cake day: June 5th, 2023


  • Well, the thing is, if the physics or render steps (not necessarily the logic) don’t advance, there’s no change in the world or in the screen buffer for the PC to show you. That’s what those frame counters report: not how many frames the screen displays, but how many frames the game can output once it finishes its calculations. You can also have a game running at 200 frames per second while your screen refreshes at 60.

    So, when someone unlocks the frame rate, they probably just increased the physics steps per second, which has the unintended consequences you described because the forces are not adjusted for how many times they’re now being applied.

    And yeah, a bit: if you know your target is 30 fps and you don’t plan on increasing it, that simplifies the development of the physics engine a lot, since you don’t have to test different speeds or do the extra calculations to adjust the forces.
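    This is easy to see with a toy integration (made-up numbers, not from any particular engine): each step adds F*dt to the velocity, so doubling the steps per second without rescaling dt doubles the total impulse.

```shell
# Toy demo: integrate v += F*dt over one simulated second.
# At 30 steps/s with dt = 1/30 the result is correct; running 60 steps
# while keeping dt = 1/30 (frame rate unlocked, forces not adjusted)
# applies twice the impulse.
awk 'BEGIN {
  F = 10                                                    # constant force, arbitrary units
  v30 = 0;  for (i = 0; i < 30; i++) v30  += F * (1.0/30)   # correct: 30 steps of dt = 1/30
  vbad = 0; for (i = 0; i < 60; i++) vbad += F * (1.0/30)   # wrong: 60 steps, dt unchanged
  printf "correct: %g  unlocked-without-rescale: %g\n", v30, vbad
}'
# → correct: 10  unlocked-without-rescale: 20
```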


  • Oh, I was only aware of credits where the lender sets the payment to be exactly the total spread over the period; those are the only ones I’ve seen and taken, so each month I get charged the amount needed to keep up with the credit.
    For the rest, it now makes sense how they make money, since I’ve had credit cards that don’t show, or at best hide, the amount needed to avoid paying interest, and only tell you the minimum payment.
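    For the installment kind, the fixed payment is just the standard annuity formula. A quick check with made-up numbers (1200 over 12 months at 2% monthly interest, not anyone’s real terms):

```shell
# Hypothetical numbers: the fixed installment that exactly clears a
# 1200 balance over 12 months at 2% monthly interest.
awk 'BEGIN {
  P = 1200; r = 0.02; n = 12
  pay = P * r / (1 - (1 + r)^(-n))   # annuity (amortization) formula
  printf "monthly payment: %.2f\n", pay
}'
# → monthly payment: 113.47
```

    Pay that every month and the balance hits zero at month 12; pay only a smaller minimum and interest keeps accruing on the remainder, which is where the issuer makes its money.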








  • it just seems to redirect to an otherwise Internet accessible page.

    I’m using Authelia with Caddy, but I’m guessing your setup could be similar: you need to configure the reverse proxy to expect the token the authentication service adds to each request, and to redirect to sign-in when it’s missing. This way all requests to the site are protected (of course, you’ll need to be aware of APIs and similar non-UI requests).
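    On the Caddy side that’s a forward_auth block. A minimal sketch, assuming Authelia is reachable at authelia:9091 (hostnames and ports are placeholders for your setup):

```
app.example.com {
    # Every request is checked against Authelia first; unauthenticated
    # users get redirected to the sign-in page.
    forward_auth authelia:9091 {
        uri /api/authz/forward-auth
        copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
    }
    reverse_proxy app:8080
}
```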

    I have to make an Internet accessible subdomain.

    That’s true, but you don’t have to expose the actual services you’re running. An easy solution would be to name the subdomain something unrelated, especially if the people using it trust you.
    Another would be to use a wildcard certificate; that way only you and the people you share your site with know the actual subdomain in use, since it won’t show up in certificate transparency logs.

    My advice comes from my personal setup, which is still all internal, with remote access over Tailscale. So, do you really need to make your site public to the Internet?
    It’s only worth exposing publicly if you need to share it with multiple people; for just you or a few people it’s not worth the hassle.





  • I sort of did this for some movies to lessen the burden of on-the-fly encoding, since I already know which formats my devices support.
    Just something to keep in mind: my devices only support HD, so I had a lot of wiggle room on quality.

    Here’s the command jellyfin was running, which helped me start figuring out what I needed:

    /usr/lib/jellyfin-ffmpeg/ffmpeg -analyzeduration 200M -f matroska,webm -autorotate 0 -canvas_size 1920x1080 -i file:"/mnt/peliculas/Harry-Potter/3.hp.mkv" -map_metadata -1 -map_chapters -1 -threads 0 -map 0:0 -map 0:1 -map -0:0 -codec:v:0 libx264 -preset veryfast -crf 23 -maxrate 5605745 -bufsize 11211490 -x264opts:0 subme=0:me_range=4:rc_lookahead=10:me=dia:no_chroma_me:8x8dct=0:partitions=none -force_key_frames:0 "expr:gte(t,0+n_forced*3)" -sc_threshold:v:0 0 -filter_complex "[0:3]scale=s=1920x1080:flags=fast_bilinear[sub];[0:0]setparams=color_primaries=bt709:color_trc=bt709:colorspace=bt709,scale=trunc(min(max(iw\,ih*a)\,min(1920\,1080*a))/2)*2:trunc(min(max(iw/a\,ih)\,min(1920/a\,1080))/2)*2,format=yuv420p[main];[main][sub]overlay=eof_action=endall:shortest=1:repeatlast=0" -start_at_zero -codec:a:0 libfdk_aac -ac 2 -ab 384000 -af "volume=2" -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 3 -hls_segment_type mpegts -start_number 0 -hls_segment_filename "/var/lib/jellyfin/transcodes/97eefd2dde1effaa1bbae8909299c693%d.ts" -hls_playlist_type vod -hls_list_size 0 -y "/var/lib/jellyfin/transcodes/97eefd2dde1effaa1bbae8909299c693.m3u8"
    

    From there I played around with several options and ended up with this command (it has several -map options since I was actually combining several files into one):

    ffmpeg -y -threads 4 \
    -init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda \
    -i './Harry Potter/3.hp.mkv' \
    -map 0:v:0 -c:v h264_nvenc -preset:v p7 -profile:v main -level:v 4.0 -vf "hwupload_cuda,scale_cuda=format=yuv420p" -rc:v vbr -cq:v 26 -rc-lookahead:v 32 -b:v 0 \
    -map 0:a:0 -map 0:a:1 \
    -fps_mode passthrough -f mp4 ./hp-output/3.hp.mix.mp4
    

    If you want to know other values for each option you can run ffmpeg -h encoder=h264_nvenc.

    I don’t have at hand all the sources where I learned what each option does, but here’s what to keep in mind, to the best of my memory.
    All of these comments are from the point of view of h264 with NVENC.
    I assume you know how the video and stream number selectors work in ffmpeg.

    • Using GPU hardware acceleration produces a lower quality image at the same sizes/presets; it just helps by taking less time to process.
    • You need to adjust the -preset, -profile and -level options to your quality and processing-time needs.
    • -vf was to convert the pixel format of my original files into a more common one.
    • The combination of the -rc and -cq options is what controls the variable bitrate (you have to set -b:v to zero, otherwise that value is used as a constant bitrate).

    Try different combinations on small chunks of your files.
    IIRC the options you need are -ss, -t and/or -to, so you process just a chunk of the file instead of waiting hours for a full movie.
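    For example, reusing the options from the command above (file names here are placeholders), this encodes only 60 seconds starting at the 5-minute mark:

```shell
# Sketch: fast-seek to 5:00 (-ss before -i) and stop after 60 s (-t),
# so you can compare -cq values on a short sample instead of a full movie.
ffmpeg -y -ss 00:05:00 -t 60 -i input.mkv \
    -map 0:v:0 -c:v h264_nvenc -preset:v p7 -rc:v vbr -cq:v 26 -b:v 0 \
    sample-cq26.mp4
```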


    Assuming that I have the hardware necessary to do the initial encoding, and my server will be powerful enough for transcoding in that format

    There’s no need for a GPU or a big CPU to run these commands; the only cost is time.
    Since we’re talking about preprocessing the library, you don’t need real-time encoding: your hardware can take one or two hours to process a 30-minute video and you’ll still get the result, so you only need patience.

    You can see jellyfin uses -preset veryfast while I use -preset p7, which the documentation marks as slowest (best quality).
    That’s because jellyfin only processes the video while you’re watching it, so it needs to produce frames faster than your devices display them.
    My command has no such constraint: I just run it, and whenever it finishes I’ll have the files ready for when I want to watch them, with no need for an additional transcode.


  • I think you have two options:

    1. Use a reverse proxy so you can even have two different domains for each instead of a path. The configuration for this would change depending on your reverse proxy.
    2. You can change your Pi-hole’s config in /etc/lighttpd/conf-available/15-pihole-admin.conf. In there you can see the base URL being used and its other redirects. Just remember to check this file after each update, since it warns you it can be overwritten by that process.
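    For option 1, the config is short in most reverse proxies. A sketch with Caddy, since that’s what I use (domains and ports are placeholders):

```
pihole.example.com {
    reverse_proxy 127.0.0.1:8080   # Pi-hole admin interface
}

other.example.com {
    reverse_proxy 127.0.0.1:3000   # the other service
}
```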

  • Just did the upgrade. I only went and copied the Docker folder for the volume.

    # docker inspect immich_pgdata | jq -r ".[0].Mountpoint"
    /var/lib/docker/volumes/immich_pgdata/_data
    

    Inside that folder were all the DB files, so I just copied them into the new folder I created for ./postgres.
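    A sketch of that copy (the cp flags here are one way to preserve ownership and permissions, not necessarily the exact command I ran):

```shell
# Find the volume's data directory, then copy its contents into the
# new bind-mount folder, keeping ownership and permissions intact.
src=$(docker inspect immich_pgdata | jq -r ".[0].Mountpoint")
sudo cp -a "$src"/. ./postgres/
```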

    I thought there would be issues with the file permissions, but no, everything went smoothly and I can’t see any data loss.
    (This was even a migration from 1.94 to 1.102, so I also did the pgvecto-rs upgrade.)




  • I’m not sure what you mean by articles not loading properly.
    I haven’t had any issues with FreshRSS’ UI showing all the data.

    Have you checked whether the feed sends the full article in it?
    For example, Ars’ feed sends a few paragraphs and includes a link at the end with Read the remaining X paragraphs.
    404 Media’s does send the full article content in its feed.
    9to5Google’s only sends you a single line from the article!

    So, it depends on what you need.

    If you want to see the full content, you probably need an extension that either curls the link of each item in the feed and replaces the feed’s content with what the curl returns, or one that embeds an iframe with the link so the browser loads it for you.
    IIRC there are two YouTube extensions that do something similar to change the links to Invidious or Piped: one replaces the content with the links, and the other adds a new element to load the video directly in the feed.


  • The port of your postgres container is still the same for other containers; what you did was just map container port 5432 to port 8765 on your host.

    You don’t need to change the port or the host the immich services use within the network docker compose creates. You still have container_name: immich_postgres, so you didn’t change anything for the other containers.

    What you did change is the command you type to bring the container up or down: from docker compose up database to docker compose up immich-database (which you normally won’t use anyway, since you want to bring everything up and down at once).
    If you run docker ps you’ll still see the container’s name is immich_postgres.
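    A sketch of the relevant compose fragment (image tag is a placeholder), to show which name is which:

```yaml
services:
  immich-database:                    # service name: used in `docker compose up immich-database`
    container_name: immich_postgres   # container name: shown by `docker ps`, reachable by other containers
    image: postgres:15                # placeholder image/tag
    ports:
      - "8765:5432"                   # host port 8765 -> container port 5432; inside the network it's still 5432
```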