• 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: June 20th, 2023




  • This sounds like a timing issue to me. The thread bunching up may be due to the hook not grabbing the thread or the take-up lever not taking up the slack at the correct time. If it’s missing stitches in zig-zag mode then that would also be due to either hook timing or possibly needle bar alignment.

    Simple things to check:

    • Make sure that the needle is installed correctly, especially that it is oriented the right way and inserted all the way in

    • Make sure that the take-up lever is threaded correctly

    Assuming these are both correct, you can try the following:

    • If possible, insert a fresh needle (at least, you will need a needle that is undamaged and not bent from the shank up to the eye)

    • Remove the plate, leave the machine unthreaded

    • On the straight stitch setting, turn the hand wheel slowly and check that the eye of the needle is exactly level with the hook as they pass each other (this should happen close to the bottom of the needle’s stroke but may not be exactly at the bottom)

    • On the widest zig-zag stitch setting, again turn the hand wheel slowly and check that the eye of the needle passes closely to the hook (it won’t be exact because the needle has moved, but it should be just slightly early on one side and just slightly late on the other, not noticeably early or late on one side) and also check that the needle is not colliding with any solid parts of the machine on either side

    If the eye and the hook are not aligned as they pass each other, then you have either a timing or a needle height alignment issue. If they pass correctly on the straight stitch but the needle is noticeably early or late on one side of the zig-zag stitch (and fine on the other side) then you have an issue with the horizontal alignment of the zig-zag stitch.


  • That machine is a pretty solid choice if it works, and a worthwhile repair project if it doesn’t (it may have seized up if not maintained recently or it may have timing or alignment issues from age).

    Machines like that are quite solidly built compared to modern machines; I would be surprised if it couldn’t get through a few layers of denim for a few stitches (I wouldn’t recommend doing 6 layers continuously, but crossing over a side seam should be OK). If you’re concerned, you can always hand-crank it for that part.

    The lack of a free arm may be somewhat limiting for hems. The “stupid” solution would be to stand the machine up on top of a crate or similar, as long as the circumference of the leg/other fabric is large enough to fit around the bottom metal “plate” of the machine. (These machines have a metal body designed to be built into a cabinet or shelf top. I’m not sure if yours includes a wooden box around the bottom or if it is just the machine itself, but if there is any wood then the machine can be removed from this leaving just the metal body of the machine itself which may provide more flexibility in this regard.)


  • I haven’t come across any significant discussion surrounding this before and I wouldn’t recommend choosing a machine on this basis.

    A front-loading bobbin is only an advantage for changing mid-task if you catch it before the thread runs out; otherwise you’ll be backtracking and starting again once you’ve replaced it anyway. If there is a viewing window and you can see when it is about to run out, that is a genuine advantage; without one you won’t know when to stop and change it until you notice it has already run out.

    In terms of speed, I doubt you will find any typical sewing machine “too slow” unless you plan to sew a lot and want it finished quickly. For a few repairs or alterations and the occasional custom piece, speed is not a priority; most of the time you will want to go slower anyway for more control and accuracy.

    I think you need to put less thought into which machine you get and more into getting some machine and starting to sew, without worrying so much about details like how the bobbin is loaded. As a beginner these things don’t matter, and by the time you are experienced enough for them to matter, you will know which aspects are important to you and whether you want to upgrade. As it is, you can’t really make “expert-level” choices because you don’t yet have the experience to know, for example, whether speed is even a priority for you.



  • I tried getting it to write out a simple melody using MIDI note numbers once. I didn’t think of asking it for LilyPond format, I couldn’t think of a text-based format for music notation at the time.

    It was able to produce a mostly accurate output for a few popular children’s songs. It was also able to “improvise” a short blues riff (mostly keeping to the correct scale, and showing some awareness of/reference to common blues themes), and write an “answer” phrase (which was suitable and made musical sense) to a prompt phrase that I provided.
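
    For anyone curious what that looks like as text: MIDI note numbers are just integers (with 60 as middle C in the usual convention), so a melody is representable as a plain list. A quick sketch of the mapping — the helper below is my own, not from any library:

```python
# Convert a MIDI note number to a pitch name.
# Convention: 60 = C4 ("middle C"); MIDI octave numbering starts at -1.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(midi_number: int) -> str:
    octave = midi_number // 12 - 1
    return f"{NOTE_NAMES[midi_number % 12]}{octave}"

# Opening of "Twinkle, Twinkle, Little Star" as MIDI note numbers:
melody = [60, 60, 67, 67, 69, 69, 67]
print([note_name(n) for n in melody])  # ['C4', 'C4', 'G4', 'G4', 'A4', 'A4', 'G4']
```

    A format like this is easy for an LLM to emit, though unlike LilyPond it carries no rhythm information unless you add it alongside each note.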


  • To be honest, the same could be said of LLaMa/Facebook (which doesn’t particularly claim to be “open”, but I don’t see many people criticising Facebook for doing a potential future marketing “bait and switch” with their LLMs).

    They’re only giving these away for free because they aren’t commercially viable. If anyone actually develops a leading-edge LLM, I doubt they will be giving it away for free regardless of their prior “ethics”.

    And the chance of a leading-edge LLM being developed by someone other than a company with prior plans to market it commercially is quite small, as they wouldn’t attract the same funding to cover the development costs.


  • IMO the availability of the dataset is less important than the model, especially if the model is under a license that allows fairly unrestricted use.

    Datasets aren’t useful to most people, and they carry more risk of a lawsuit, or of being ripped off by a competitor, than the model does. Publishing a dataset containing copyrighted content is legally grey at best, while the jury is still out on models trained on such datasets, and a model also carries some short-term plausible deniability.



  • TBH my experience with SillyTavern was that it merely added another layer of complexity/confusion to the prompt formatting/template experience, as it runs on top of text-generation-webui anyway. It was easy for me to end up with configurations where e.g. the SillyTavern turn template would be wrapped inside the text-generation-webui one, and it is very difficult to verify what the prompt actually looks like by the time it reaches the model as this is not displayed in any UI or logs anywhere.

    For most purposes I have given up on any UI/frontend and I just work with llama-cpp-python directly. I don’t even trust text-generation-webui’s “notebook” mode to use my configured sampling settings or to not insert extra end-of-text tokens or whatever.
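
    For what it’s worth, the direct approach looks roughly like this (a sketch using llama-cpp-python; the model path is a placeholder and the template/sampling settings are just examples, not recommendations). The point is that you assemble the full prompt string yourself, so there is nothing hidden between you and the model:

```python
# Build the exact prompt string yourself so nothing is rewritten behind your back.
# The template below is Alpaca-style; swap in whatever your model expects.
def build_prompt(system: str, user: str) -> str:
    return f"{system}\n\n### User:\n{user}\n\n### Response:\n"

prompt = build_prompt("You are a helpful assistant.", "Summarise this thread.")
print(repr(prompt))  # repr() shows every newline, so you can verify the exact text

# Inference (requires a local GGUF model file; the path is a placeholder):
# from llama_cpp import Llama
# llm = Llama(model_path="/path/to/model.gguf")
# out = llm(prompt, max_tokens=256, temperature=0.7, stop=["### User:"])
# print(out["choices"][0]["text"])
```

    Printing the `repr()` of the prompt before sending it is exactly the verification step that the UIs make difficult.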




  • text-generation-webui “chat” and “chat-instruct” modes are… weird and badly documented when it comes to using a specific prompt template. If you don’t want to use notebook mode, use “instruct” mode, set your turn template with the required tags, and include your system prompt in the context (? I forget what it is labeled as) box.

    EDIT: Actually I think text-generation-webui might use <|user|> as a special string to mean “substitute the user prefix set in the box directly above the turn template box”. Why they have to have a turn template field with “macro” functionality and then separate fields for user and bot prefixes when you could just… put the prefix directly in the turn template I have no idea. It’s not as though you would ever want or need to change one without the other anyway. But it’s possible that as a result of this you can’t actually use <|user|> itself in the turn template…
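
    If I’m right about that macro behaviour, the expansion would work something like this (my own sketch of what I think text-generation-webui does, not its actual code; the macro names and template are assumptions):

```python
# Sketch of how I believe the turn template "macros" get expanded:
# <|user|> / <|bot|> are replaced with the prefix fields, and
# <|user-message|> / <|bot-message|> with the actual message text.
def expand_turn(template: str, user_prefix: str, bot_prefix: str,
                user_msg: str, bot_msg: str) -> str:
    return (template
            .replace("<|user|>", user_prefix)
            .replace("<|bot|>", bot_prefix)
            .replace("<|user-message|>", user_msg)
            .replace("<|bot-message|>", bot_msg))

template = "<|user|> <|user-message|>\n<|bot|> <|bot-message|>\n"
print(expand_turn(template, "### User:", "### Response:", "Hello", "Hi there"))
# ### User: Hello
# ### Response: Hi there
```

    Which is also why splitting the prefixes into separate fields buys nothing: the substitution is a plain string replace that could just as well live in the template itself.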



  • I haven’t got any experience with the 70B version specifically, but based on my experience with LLaMa 2 13B (still annoyed that there’s no 30B version of v2…) it is more sensitive to prompting variations than other models, as it isn’t specifically trained for “chat”, “instruct”, or “completion” style interactions. It is capable of all three, but without a clear prompt and template it can be somewhat random as to what kind of response you will get.

    For example, using

    ### User:
    Please write an article about [subject].
    
    ### Response:
    

    as the prompt will get results varying from a written article to “The user’s response to an article about [subject] is” to “My response to this request is to ask the user about [clarifying questions]” to “One possible counterargument to an article about [subject] is” to literally the text “Generating response, please wait… [random URL]”. Whereas most conversationally-fine-tuned models will understand and follow this template or other similar templates and play their side of the conversation even if it doesn’t match exactly what they were trained on.

    I would recommend using llama.cpp (or the Python binding) directly for more awareness of and control over the exact prompt text as seen by the model. Or using text-generation-webui in “notebook” mode (which just gives you a blank text box that both you and the LLM will type into and it’s up to you to provide the prompt format). This will also avoid any formatting issues with the chat view in text-generation-webui (again I don’t have any specific experience with LLaMa 2 70B but I have encountered times when models don’t output the markdown code block tags and text-generation-webui will mess up the formatting).

    Note that for some reason the difference between chat, instruct, and chat-instruct modes in text-generation-webui are confusingly named. instruct mode does not include an “instruction” (e.g. “Continue the conversation”) before the conversation unless you include one in the conversation template (the conversation template is referred to as “Instruction template” in the UI). chat-instruct mode includes an instruction such as “Continue the conversation by writing a single response for Assistant” before the conversation, followed by the conversation template. chat and chat-instruct modes also include text that describes the character that the model will speak as (mostly used for roleplay but the default “None” character describes a generic AI assistant character - it is possible that the inclusion of this text is what is helping LLaMa 2 stay on track in your case and understand that it is participating in a conversation. I’m not sure what conversation template chat mode uses but afaik it is not the same turn template as set in instruct and chat-instruct modes and I don’t see an option to configure it anywhere.