All of this user’s content is licensed under CC BY 4.0.

  • 12 Posts
  • 156 Comments
Joined 1 year ago
Cake day: October 20th, 2023



  • You use too many products; no way that can be good for your skin. Even showering every day is imo unnecessary, once every other day or once a week is good enough if the only thing you did was sit in an office all day. And if you do shower that often, most of the time you should only use water, not any other products.

    Are you only here to spread negativity, or do you have any proof behind your claims? I’m not interested in opinions.



  • So, I bought an EasyCap device and ran some tests. I encountered a few things that I don’t quite understand, and I would really appreciate your input!

    I used a test VHS tape that I purchased at a thrift store (I’m not 100% sure if it’s NTSC or PAL, but I’m decently confident that it’s NTSC; I’m also not sure what its aspect ratio is — I think it’s either 1.33:1 or 4:3). I’m playing the tape in a PV-D4745S-K VCR. The composite out of the VCR goes into the aforementioned capture device, which is connected to a computer running Arch Linux.

    First, I used the following ffmpeg capture settings:

    ffmpeg -i /dev/video2 out.mkv
    

    After capturing a short snippet of the test tape, I probed its metadata with ffprobe -i out.mkv and saw that it was, strangely, 25 fps and 720x576 (which caused the video to be stretched vertically slightly), i.e. PAL. So, somehow, an NTSC VHS being played in an NTSC VCR was being converted to PAL. In addition to that, the colors in the video were very overexposed. I tried a bunch of different manual settings (specifying interlacing with -vf "interlace", -standard ntsc, -vf scale=720:480, -vf fps=29.97, -standard NTSC), and none of them solved the issue. I then came across this answer on StackOverflow, which stated that capture cards themselves have settings for the video feed, and that ffmpeg can modify them with the -show_video_device_dialog true option. From the ffmpeg documentation:

    show_video_device_dialog

    If set to true, before capture starts, popup a display dialog to the end user, allowing them to change video filter properties and configurations manually. Note that for crossbar devices, adjusting values in this dialog may be needed at times to toggle between PAL (25 fps) and NTSC (29.97) input frame rates, sizes, interlacing, etc. Changing these values can enable different scan rates/frame rates and avoiding green bars at the bottom, flickering scan lines, etc. Note that with some devices, changing these properties can also affect future invocations (sets new defaults) until system reboot occurs.

    Unfortunately, when trying this option, an error popped up saying that the option was unrecognized. After some digging (and prompting ChatGPT), I found that the option is apparently Windows-only, as it relies on Windows’ DirectShow system. The way to modify these settings on Linux is through the Video4Linux2 framework, which is controlled with v4l2-ctl. So, I ran the following:

    v4l2-ctl --device=/dev/video2 --list-formats-ext
    

    which showed the following entry:

    ...
    [0]: 'YUYV' (YUYV 4:2:2)
        size: Discrete 720x480
    ...
            Interval: Discrete 0.033s (30.000 fps)
    ...
    

    So it is able to output NTSC — i.e. 720x480 at 29.97 fps (I guess the listing rounds the frame rate up to 30.000 for whatever reason). So I then tried

    ffmpeg -f v4l2 -video_size 720x480 -i /dev/video2 out.mkv
    

    and it was able to output the video at 720x480 and 29.97 fps as desired, and the colors were no longer super overexposed. Using the -vf "interlace" flag, I also seem to be able to capture interlaced video, so it doesn’t force progressive, which is nice.
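
    For archiving, I haven’t tried this yet, but as a rough sketch, a lossless capture that keeps the fields untouched (so any deinterlacing can be done later in software) might look something like this, with FFV1 as an assumed codec choice on my part:

    # Sketch only: lossless FFV1 video, no deinterlacing filter applied during capture
    ffmpeg -f v4l2 -video_size 720x480 -i /dev/video2 -c:v ffv1 out.mkv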

    I thought that the capture card would be able to just autodetect the input resolution so that ffmpeg could record at that, or, at the very least, I would expect that specifying NTSC in ffmpeg would force the standard, but neither of those worked and I’m not sure why. There’s also still an ongoing issue of the video being zoomed in / cropped slightly; I verified this by comparing against online sources of the same video (some were VHS rips, others from non-VHS sources). I tested the VCR’s output on a regular TV, but unfortunately the TV forced 4:3 and cropped it even more, so I wasn’t able to make a perfect comparison, though that only added horizontal cropping — the vertical cropping from before was still present. To verify it properly, I’ll have to pick up another test VHS tape to see if perhaps the one I currently have was just recorded in a cropped format.
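
    Related to the standard-detection issue above: something else I might try (untested, and assuming the STK1160 driver exposes standard selection through V4L2) is forcing the video standard on the device itself with v4l2-ctl before capturing, rather than through ffmpeg:

    # Sketch only: query, then force, the analog video standard on the capture device
    v4l2-ctl --device=/dev/video2 --get-standard
    v4l2-ctl --device=/dev/video2 --set-standard ntsc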


  • It seems to be an EasyCAP clone; there are several devices in this form factor with different chipsets.

    Good to know! That link has a lot of good information.


    This capture device seems to be labeled as “BR116” based on photos in reviews, which can help identify the chipset. BR116 is sold by Conrad, and the manual they provide mentions “STK1160” in a screenshot, so this Amazon one most likely also uses the STK1160 chip, which was one of the worst ones in this timebase stability test (which means it has no TBC). However, it’s alright if your VCR is a late model that already does TBC internally.

    Noted! I will keep this in mind.


    I came across this video about digitizing VHS tapes [1]. It talks about hardware to use, and hardware to avoid [1.6]. One of the examples that it gives for hardware to avoid seems to be a clone of the device that I was looking at on Amazon [1.2]. The rationale for why it should be avoided was that it doesn’t pass both fields of the interlaced video through independently [1.1]. Though, you have mentioned that it’s fine to capture the video interlaced, so perhaps this isn’t a big deal-breaker. The capture cards that the video recommends are:

    • IO-Data GV-USB2 [1.3]
    • StarTech.com SVID2USB232 [1.4]
    • Dazzle DVC-100 v1.1 [1.5]

    References
    1. “How to convert VHS videotape to 60p digital video”. The Oldskool PC. YouTube. Published: 2023-02-07. Accessed: 2024-09-14T21:09Z. https://www.youtube.com/watch?v=tk-n7IlrXI4.
      1. T00:03:56
      2. T00:04:08
      3. T00:04:38
      4. T00:04:59
      5. T00:05:19
      6. T00:03:50

  • Get an actual composite capture card for the job.

    Ha, honestly, I wish that I had done this to begin with. It’s way cheaper and simpler to get a single composite capture card than to convert composite to HDMI and then capture the HDMI. I’m honestly not entirely sure why I did the latter; perhaps I was under the presumption that such a device wouldn’t exist (which, I now realize, is an obviously silly assumption to make). I found this one. It’s still just a generic capture card, but it captures composite directly. Do you think that it would suffice?


  • Check that the output is indeed interlaced

    Is it possible to see this in OBS? I see an option to select an interlacing technique if I right-click the scene.


    Look at stats/logs to see if any frames are dropped and investigate if it’s just the 59.94 Hz compensation

    Are you referring to “stats/logs” within OBS?


    make sure to disable auto-gain or else quiet sections will get boosted like crazy, increasing the noise.

    If you are referring to a toggle on the capture card or the converter, neither has a button for that, so I think my setup is fine in that regard?


  • This was very informative! Thank you for your comment!


    you should check that the video output is actually at [59.94 Hz]

    How does one measure the input frequency of the video feed? I’m not aware of OBS being able to monitor the frequency/refresh-rate of individual input devices, but I could certainly be wrong.


    Don’t use the converter if it cannot output 480i or at the very least 480p! Scaling should happen during playback, the files should be original resolution.

    I looked on Amazon again, and it seems that every converter being sold only outputs 720p or 1080p — none of them simply pass through the input resolution, e.g. 480p or 480i. Would you have a converter in mind that would accomplish this?


    I’d just clean the VCR after every tape if I suspect mold. You’d still need to clean the cleaning VCR after every tape to avoid cross-contamination

    Do you have any resources that you would recommend for proper cleaning of a VCR?


  • Why a separate VCR for cleaning tapes?

    I was just thinking that the cleaning process might damage the VCR (as one is rummaging around in its internals [1]), so it’d be better to use a lower-quality VCR for cleaning and a good-quality one for digitization.

    References
    1. “How to Clean a Moldy VHS Tape”. Dustin Kramer. YouTube. Published: 2016-04-24. Accessed: 2024-09-10T18:49Z. https://www.youtube.com/watch?v=uVq0o2CzVKI.

    you should definitely not use default deinterlacing techniques for the video

    What “default deinterlacing techniques” are you referring to?


    you should […] especially not [use deinterlacing techniques] built into these generic dongles

    How do I find out that information for the two devices that I purchased (mentioned in the post)? How would I even control that? Only the composite-to-HDMI converter has a single switch, which toggles between 720p and 1080p. I don’t see anything else that would control what deinterlacing technique is used.


    Capture [the video] interlaced, preferably as losslessly as possible

    What method do you recommend to accomplish this?


    use deinterlacing software where you can fine-tune the settings if you need to.

    Is this possible in OBS?


    TBC can obviously be done in software if you have the raw composite or head signal but that is not possible with the capture cards you have.

    If I did want to capture the raw signal, do you have any methods and/or tools that you would recommend to accomplish this?





  • When I use a website as a source, at the time that I access it for information, I also save a snapshot of it in the Wayback Machine. Of course, there’s no guarantee that the Internet Archive itself will survive, but it is far more likely to than some random website. So, if the link dies, one can still see the page in the Wayback Machine. This also has the added benefit of locking in what the source looked like at the time it was accessed (assuming one records the access timestamp in the citation).
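
    As a side note, saving the snapshot can also be scripted. A rough sketch using the Wayback Machine’s Save Page Now endpoint (the URL below is just a placeholder):

    # Sketch: ask the Wayback Machine to archive a page (placeholder URL)
    curl -sL "https://web.archive.org/save/https://example.com" > /dev/null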



  • The copy/paste ctrl-c and ctrl-v keyboard shortcuts are also a lot less convenient but I just deal with it.

    Thankfully, these were only shifted one to the right in Workman.


    It’s also annoying having to rebind keys in pretty much every keyboard-heavy game.

    Yeah, I’ve gotten used to that. I’ll sometimes switch the layout back to QWERTY in the OS when playing games (my layout is determined by the OS setting rather than by the keyboard hardware) so that I don’t have to rebind, but it doesn’t always seem to work. At the very least, I don’t think you can switch layouts while the game is running. Some games also appear to read raw key codes rather than what the OS sends, so they ignore the software keyboard layout anyway.
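
    For reference, the software switch itself is simple on an X11 session (assuming the Workman variant that ships with xkeyboard-config; Wayland desktops handle layouts through their own settings):

    # Sketch: switch the OS layout to QWERTY for a game, then back to Workman afterwards
    setxkbmap us
    setxkbmap us -variant workman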


  • Pet theory: most Dvorak users were, in their pre-enlightenment lives, messy freestyle 3-finger typists.

    Given that Dvorak tries to maximize hand alternation between consecutive characters [1], that theory feels plausible: the “hunt-and-peck” typing style naturally lends itself to alternating hands. I think the same idea could also apply to mobile typing, as you only have two thumbs — perhaps Dvorak would lend itself well to typing on a phone?

    References
    1. “Dvorak keyboard layout”. Wikipedia. Accessed: 2024-08-10T23:00Z. https://en.wikipedia.org/wiki/Dvorak_keyboard_layout#Overview

    Letters should be typed by alternating between hands (which makes typing more rhythmic, increases speed, reduces error, and reduces fatigue).


    If you ever went to the trouble of formally learning to touch-type Qwerty, moving to another layout just seems impossibly foreboding.

    It’s not that bad. In my experience, having gone from QWERTY to Dvorak to Colemak to Workman, it takes maybe an hour to memorize the keys, and then it’s just a matter of practicing by using it. You progressively get faster as it becomes second nature. To reach full typing speed, and for it to feel completely natural, will likely take a month or so, depending on how often and how much one types.

    Something interesting that I noticed, though, is that the brain seems only able to know one keyboard layout well at a time. When I learn a new layout, my skill with the previous one drops by more than lack of practice alone would explain. It feels almost entirely zero-sum: as I gain skill in one layout, I seem to lose an equal amount in the previous one. I do try to maintain some level of proficiency with QWERTY, given that it is still the standard and the most common layout, but it takes considerably more effort. It feels less like acquiring a new skill and more like rewiring the brain.






  • All of the services that I host are for private use:

    • Nextcloud
    • FreshRSS
    • Immich
    • Jellyfin
    • RSSBridge

    They are all behind Caddy, which reverse-proxies them and handles HTTPS. I’m not sure if it really counts as self-hosting, but I also use my server as a host for my backups with Borg, and as a sort of central syncing point for Syncthing.
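
    As a rough sketch of the shape of that setup (not my actual config; the hostnames and paths here are made up):

    # Caddy terminating HTTPS and reverse-proxying a single service (Jellyfin's default port)
    caddy reverse-proxy --from jellyfin.example.com --to localhost:8096
    # A Borg backup run against a repository on the server
    borg create --stats --compression zstd ssh://user@example.com/./borg-repo::'{hostname}-{now}' ~/documents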

    I did have a Pi-Hole at one point, but I kept running into issues with it — I may look into it again in the future.

    At some point I’d like to try implementing some ideas that I’ve had for Home Assistant (a camera server with Frigate and some other automation things). Once federation has been implemented, I would like to host a Forgejo instance. I may also host a SimpleX relay server, depending on how the app progresses. I’ve been considering hosting a Matrix instance, but I’m not sure yet.