the-comfy-coder
comfy-blog
Guess that Sentence! (Genetic Algorithm)
the-comfy-coder
The full code can be found on my GitHub.

The goal of this exercise is to make a genetic algorithm that can guess a target string. I will be using the following quote from The Hobbit: "If more of us valued food and cheer and song above hoarded gold it would be a merrier world."

Organism
This constructor generates a new organism object. It takes the goal string and makes a guess "value" string by randomly creating a string of the same length. There is also a constructor that just takes the goal string and a value string. Both goalString and value are class variables.

Fitness
This method determines the fitness level of an organism, based on the number of characters that match the goal string.

Mate
This method mates two organisms, creating two offspring returned in a list. The crossover point is randomly generated to be a number between 1 and one less than the goal string length. The two child organisms are made by crossing over each parent at that point.

Compare To
This method overrides the compareTo method for Organisms so that they are compared by their fitness levels.

Mutate
This method mutates a string based on the mutation probability given by the user. Each character has a chance of changing to another character, in hopes that the target string is found. I have also modified the toString method to display organism data.

Population
This constructor creates a new population, setting the goalString, mutation probability, number of generations, and population size based on given input. The current generation is then created as a list of organisms, which are randomly generated to begin with.

Iterate
This method goes through all of the generations, either until you run out of generations or the goal string is found. For each generation, the most fit organism is allowed to mate with all other organisms in the population.

Main
This method allows the user to choose a string to guess, the number of organisms in each generation, the number of generations, and the mutation probability. The algorithm runs until either the number of generations is met or the goal string is found, whichever comes first.
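The pieces above could be sketched in Java roughly as follows. This is a minimal illustration of the Organism operations (the class shape, the printable-ASCII alphabet, and the method details here are my sketch for this post; the full code on my GitHub differs in its details):

```java
import java.util.Random;

class Organism implements Comparable<Organism> {
    static final Random RNG = new Random();
    final String goal;
    final String value;

    Organism(String goal, String value) {
        this.goal = goal;
        this.value = value;
    }

    // Fitness: count of characters that match the goal string position by position.
    int fitness() {
        int score = 0;
        for (int i = 0; i < goal.length(); i++)
            if (value.charAt(i) == goal.charAt(i)) score++;
        return score;
    }

    // Mate: single-point crossover at a random point between 1 and
    // one less than the goal string length; the two children swap tails.
    Organism[] mate(Organism other) {
        int point = 1 + RNG.nextInt(goal.length() - 1);
        String childA = value.substring(0, point) + other.value.substring(point);
        String childB = other.value.substring(0, point) + value.substring(point);
        return new Organism[] { new Organism(goal, childA), new Organism(goal, childB) };
    }

    // Mutate: each character has `probability` chance of being replaced
    // by a random printable ASCII character (a sketch of the mutation alphabet).
    Organism mutate(double probability) {
        StringBuilder sb = new StringBuilder(value);
        for (int i = 0; i < sb.length(); i++)
            if (RNG.nextDouble() < probability)
                sb.setCharAt(i, (char) (' ' + RNG.nextInt(95)));
        return new Organism(goal, sb.toString());
    }

    // Organisms are ordered by fitness so the most fit one can be selected.
    public int compareTo(Organism other) {
        return Integer.compare(fitness(), other.fitness());
    }
}
```

With these three operations, the iterate loop is just: sort, take the most fit organism, mate it with everyone, and mutate the children.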
Multiple Perceptron Neural Network
the-comfy-coder
Full code can be found on my GitHub.

This project aims to make a multi-perceptron neural network that recognizes the four regions of the plane shown to the right. It was done as a part of my CS620 AI Methods class taken in the fall of 2024.

Initializing Weights
I arbitrarily chose to have the weights initialize to a decimal between 0 and 1.0.

Calculating Output
This method takes the weights[][] and inputs[] arrays. The length of the output array is set to the number of output neurons. For each output neuron i, the value of output[i] is set to 0 and the weighted sum is computed. The activation function is then applied: output[i] is set to 1 if the sum is greater than 0 and 0 otherwise. Once all of the output neurons have been calculated, the output array is returned.

Updating the Weights
This method modifies the weights based on the error between the target value and the output. The formula used is w[j][i] ← w[j][i] + λ·e[i]·x[j], where λ is the learning rate and e[i] is the error for output neuron i.

Training the NN
This method trains the NN to determine which region a given point is in. The regions are determined according to the table to the left, with each binary pair (x1, x2) defined such that: x1 is 1 if the point is above y = x and 0 if it is below; x2 is 1 if the point is above y = -x and 0 if it is below. The points are generated randomly within a range of -500 to 500.

Testing the NN
The operation is the same as training the NN, just without updating the weights.

Single Step Test
Allows the user to manually input a point (x, y) to see if the NN can properly determine which region it belongs in.

The Main Method
Allows the user to select how many training and testing samples to use and what the learning rate should be, and then test with individual points.

Results
In testing, I found that the optimal learning rate was 0.2. I arbitrarily chose 10,000 training samples and 1,000 testing samples.
Training
Number correct: 9443
In a row: 101
Percent correct: 94.43

Testing
Number correct: 998
In a row: 177
Percent correct: 99.8
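For reference, the output calculation and the weight update described above can be sketched like this (a minimal illustration with names of my own choosing, not the exact code from my GitHub):

```java
class Perceptron {
    // Compute outputs for a single-layer network: for each output neuron i,
    // take the weighted sum of the inputs and apply a step activation
    // (1 if the sum is greater than 0, else 0).
    static int[] computeOutput(double[][] weights, double[] inputs) {
        int numOutputs = weights[0].length;
        int[] output = new int[numOutputs];
        for (int i = 0; i < numOutputs; i++) {
            double sum = 0;
            for (int j = 0; j < inputs.length; j++)
                sum += weights[j][i] * inputs[j];
            output[i] = sum > 0 ? 1 : 0;
        }
        return output;
    }

    // Perceptron learning rule: w[j][i] <- w[j][i] + lambda * e[i] * x[j],
    // where e[i] = target[i] - output[i] and lambda is the learning rate.
    static void updateWeights(double[][] weights, double[] x,
                              int[] output, int[] target, double lambda) {
        for (int i = 0; i < output.length; i++) {
            int e = target[i] - output[i];
            for (int j = 0; j < x.length; j++)
                weights[j][i] += lambda * e * x[j];
        }
    }
}
```

Note that when the output already matches the target, e[i] is 0 and the weights for that neuron are left untouched, which is why testing is just training with the update step skipped.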
Kohonen NN First Attempt
the-comfy-coder
In this exercise, I used a simplified Kohonen Neural Network to cluster data into sections, as seen in the example below. I completed this project as part of my CS620 AI Principles and Methods class in the fall. The assignment was very straightforward: in Java or some other language, have the data cluster into sections as seen in the image to the right, assuming that each box is 200x200. Full code can be found on my GitHub.

Initializing Weights
The first thing you need to do in any neural network is initialize the weights of each neuron. This code creates a Kohonen network of m Kohonen neurons containing n coordinates each. It returns a 2D array of initialized weights. The weights are initialized between 0 and 600 so they are all within the problem area.

Displaying the Results
This method just uses graphics to display the output, as seen later in the article.

Training Method
I chose to hard-code 500,000 iterations just for simplicity; this code could easily be modified to allow the user to choose how many iterations to run the program for. While lambda is more than 0, a new point x is created and legalPoint is set to false. While legalPoint is still false, a new point (x, y) is made and compared to each section of the box to see if it resides inside. If the point belongs in any of the regions, it is considered a legal point and moves on to the next step. If not, the point is disposed of and a new one is randomly produced within the sample space 0-600. If the point was legal, the closest Kohonen neuron is found and added to a list of used neurons. In our display, this changes the color from red to black. Lambda is decreased by lambda/iterations, a really small number. Every other result is displayed using the drawKohonen method previously mentioned.

Main Method
The kohonen method initializes the weights and uses the trainKohonen method to train the network. I override the paint method to use the kohonenMethod data. Then finally the main method, which creates an object of the kohonen class and displays the graphical output using JFrame.

Using this method I was able to get the correct answer. To the right is a short video of the network in action! For this example, 500,000 iterations might be a bit overkill, as the data seems to cluster pretty quickly.
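The core of each training step, finding the winning neuron for a sample point and nudging it toward that point, could be sketched like this (an illustrative sketch only: the method names are mine, and the winner update shown is the standard winner-take-all rule, which the full code on my GitHub simplifies differently):

```java
class KohonenSketch {
    // Find the index of the Kohonen neuron whose weight vector is
    // closest (by squared Euclidean distance) to the input point.
    static int closestNeuron(double[][] weights, double[] point) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int m = 0; m < weights.length; m++) {
            double dist = 0;
            for (int n = 0; n < point.length; n++) {
                double d = weights[m][n] - point[n];
                dist += d * d;
            }
            if (dist < bestDist) { bestDist = dist; best = m; }
        }
        return best;
    }

    // Winner-take-all update: move only the closest neuron toward the
    // input point by a fraction lambda of the remaining distance.
    static void updateWinner(double[][] weights, double[] point, double lambda) {
        int w = closestNeuron(weights, point);
        for (int n = 0; n < point.length; n++)
            weights[w][n] += lambda * (point[n] - weights[w][n]);
    }
}
```

As lambda shrinks toward 0 over the iterations, each winning neuron moves less and less, which is why the clusters settle into place early on.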
I’m Now a Research Assistant!
comfy-book-reviews
This research diary is where I am going to record my progress and struggles in my AI research at my university. Hopefully the contents of this blog help me reflect on my growth in the future, showing me how far I have come since day one! For now my focus is on studying how AI is made and the big names in LLMs (Large Language Models). I am centering my research on the most popular ones, such as BERT and the various GPT models, finding what they are used for and how they work (as far as we know). So far there have been a couple of things that interest me. First, I find it very interesting that no one knows exactly how LLMs function. We know that they use word vectors, but how exactly they are used seems to be a bit of a mystery as far as my research goes. LLMs do things their own way and I find that fascinating. I wonder if we can ever get them to explain it to us in terms we can understand! The other thing I find interesting is a bit of a game. So far, I have asked every model I've been able to use a simple question: how many "r"s are there in the word "strawberry"? To humans who can read and count, the answer is obvious: 3. But to LLMs it doesn't appear to be that straightforward. All of the models I've been able to ask (GPT-4, GPT-4o, Claude, and Copilot) at first answer 2. They can be coached to the correct answer, but I think it is interesting that they all give the same incorrect answer! I will definitely be continuing this little game as I research.
i started a bullet journal!
comfy-blog <3 Even though the school year just ended, I have experienced a rush of energy due to my recovering health. Because of this, I have begun my journey of doing things I wanted to do while I was sick but was too weak/brain-foggy to accomplish. First up: bullet journaling! I have always loved the idea of bullet journaling ever since it first rose in popularity a few years ago. The aesthetic spreads have always littered my Instagram feed, tempting me to dare to try. The activity always seemed so daunting, with all of the accessories and stationery products required to even begin! I even bought some, which have remained pretty much unused for many years. But then I discovered digital bullet journaling. At the beginning of lockdown, I received a technology scholarship from my school and used it to buy an iPad Air for online classes. Since then I have been using Goodnotes as my main notetaking platform. Since I already owned the technology required, digital bullet journaling seemed like the most logical medium to go with. My spreads are made using the Digital Bullet Journal by WAREofSTOCKHOLM on Etsy and weather and spring stickers made by Rina at HappyDownloads. Other stickers were found for free around the web. The original draw to digital bullet journaling for me was how accessible it was. If you already own a tablet as I did, you really don't need to buy anything to begin. Plenty of creators give out freebies, allowing you to sample their stickers for free. You can also find many on the web, like the Genshin Impact ones I found that are pictured in the spreads. That being said, paying for stickers and layouts simplifies things greatly, as I've found out. It saves you the time of hunting down a million coordinating pngs, as well as gives money to small businesses like the ones mentioned above. In the end, I am happy with how my spreads have turned out! These and future spreads will definitely be posted on my Instagram, so be sure to follow me to be notified when I post!
Vtuber Moodboard
comfy-blog <3 For a few years now I have been watching Vtubers like Ironmouse and Lina. I find the Vtubing community very interesting and inviting. For a while, I have been wanting to learn more, and I thought, why wait? So I have decided to start the beginning stages of designing my Vtuber model. To start, I used Pinterest to find a general idea of what type of aesthetic I wanted to go for. The moodboard to the left is what I was able to come up with. I love pink and soft things, so I thought a comfy pastel kidcore theme would be the best for me. Hopefully, this project doesn't get abandoned like all of my others. (rip modding project)
Fighting with Physics
comfy-blog <3 If you had told me months ago that physics was hard, I would've agreed with you. And yet, for some unknown reason, I decided that it would be smart to take intro physics and Calc II at the same time. Boy was that a mistake! I've been so busy trying to keep up with the physics assignments that I've done little else. So now I have to rush to study for my data structures and algorithms final which is in two days and I've barely studied for!!!
Study Blog 1
comfy-blog <3 Today was interesting because the college was on a delay until 11 am due to road conditions. Because I live on a hill, I emailed my professors and asked if I could do class remotely and they said yes! So I had even more time to study than usual, which I definitely used wisely. The rest of the house already had the day off, so the house was full all day.

Study Log
- completed physics syllabus quiz
- wrote conclusion page for physics lab one
- went to physics lecture (virtual)
- calc 1 review

I really did very little today. Seasonal depression is hitting hard with all of the cold weather so it is hard to keep up with work. I just want to curl up in a ball and play animal crossing! Hopefully tomorrow I can be on campus and I'll feel more motivated to study.
Hi!
Comfy Blog <3 Hello! I'm Fae. This blog was started on a whim at midnight because I thought it would be fun to record my college experiences this way, adding pictures and such to look back on when I'm older, as well as to entertain any possible readers. To begin though, I want to list the things I am not: a motivational speaker, a study-specific blog, a therapist, or a smart person. So, take any advice with a massive pile of salt, and more often than not do the opposite of what I say here. I am planning on doing some very basic study blogging, posting my task list for longer study sessions. These will not be posted with the intent of guiding others; they are for my personal review in the future and for entertainment purposes.