To prove I've been working tonight
Okay, so I've got to use BackProp to teach my hidden nodes how to weight inputs and outputs in order to get the right output. We have four inputs with random 0s or 1s going in, so there's no point in thresholding those. I don't really want to be using sigmoid functions, because I'm not convinced they're appropriate to the values I'm dealing with, but I expect I'll have to anyway... So I'm dealing with 1 / (1 + e^-x), which climbs towards 1 for positive values (for an input of 1 it's about 0.73) and towards 0 for negative ones, and gives exactly 0.5 for an input of 0. Put simply, with 0/1 inputs this gives us roughly 0.73 or 0.5, depending on the input.
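
Just so I don't fool myself with those numbers, that function in Java looks something like this (a throwaway sketch, the names are mine):

    class SigmoidCheck {
        // Standard logistic sigmoid: 1 / (1 + e^-x)
        static double sigmoid(double x) {
            return 1.0 / (1.0 + Math.exp(-x));
        }

        public static void main(String[] args) {
            System.out.println(sigmoid(0.0)); // 0.5
            System.out.println(sigmoid(1.0)); // ~0.731
        }
    }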

Given that, I'll run the simulation ten times, with ten sets of random booleans, which ought to give me a decent results set. Then I'll take the results and use backprop to correct the weights. I have four inputs connected to four hidden nodes via sixteen connections, and those four hidden nodes are connected to the two outputs via just eight more. The idea is to tune those weights properly...
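
To pin down the shape of it, the weights could just live in two arrays like this - a sketch only, assuming I sum weighted inputs and squash them with the sigmoid at each layer:

    import java.util.Random;

    class TinyNet {
        // 4 inputs -> 4 hidden nodes (sixteen weights), 4 hidden -> 2 outputs (eight weights)
        double[][] inToHid = new double[4][4];
        double[][] hidToOut = new double[4][2];

        TinyNet() {
            Random rnd = new Random();
            for (int i = 0; i < 4; i++)
                for (int h = 0; h < 4; h++)
                    inToHid[i][h] = rnd.nextGaussian() * 0.5;   // small random starting weights
            for (int h = 0; h < 4; h++)
                for (int o = 0; o < 2; o++)
                    hidToOut[h][o] = rnd.nextGaussian() * 0.5;
        }

        static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

        // Forward pass: inputs are 0s and 1s; hidden activations get written into 'hidden'.
        double[] forward(int[] in, double[] hidden) {
            for (int h = 0; h < 4; h++) {
                double sum = 0;
                for (int i = 0; i < 4; i++) sum += in[i] * inToHid[i][h];
                hidden[h] = sigmoid(sum);
            }
            double[] out = new double[2];
            for (int o = 0; o < 2; o++) {
                double sum = 0;
                for (int h = 0; h < 4; h++) sum += hidden[h] * hidToOut[h][o];
                out[o] = sigmoid(sum);
            }
            return out;
        }
    }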

The output nodes will probably use a step (threshold) function to give a 0 or 1 output, since my output is going to be very simple in terms of what it looks like. So really, all I have is a numerical weight on each of the inputs and the hidden nodes. So I have 8 nodes to play about with, which is fair enough. The general idea is that I get a modulo-type output of the input: the first output node gives us Sum(IN) mod 2, and the second node gives us the opposite. If I were to be cheeky, I'd invoke a wicked bad idea of mine and just rig the input function of the second node so it's always the opposite of the first node. Something tells me that might not work, but it might. I might need a more complex intermediate stage to pull this off, and if I do, then I can't take that shortcut. We'll have to see. In theory, so long as the code works, it doesn't matter if the design is off, because I'll still get marks. I'm not looking for 100% here.
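
And the targets I'm chasing for a given set of random booleans are trivial to write down (a hypothetical helper, just to make the goal concrete):

    // First target is Sum(IN) mod 2, second is its opposite.
    static int[] targets(int[] inputs) {
        int sum = 0;
        for (int bit : inputs) sum += bit;
        int parity = sum % 2;
        return new int[] { parity, 1 - parity };
    }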

So yeah, do BackProp on that, but that shouldn't be too hard - I've been reading about it, and it seems simple enough: take into account the gradient of the error with respect to the weights of the nodes (rough sketch of what I think the update looks like below)... The second part is a genetic algorithm - I already have a genetic algorithm running on my PC, but it's kinda stupid. It runs in an N-Dimensional Reality Arena, with variable selective pressures and local breeding, for solving multiple-goal scenarios. It's convoluted and takes ages to run, but you can come up with 20 answers to a linked puzzle with just one run of the program. It's actually fucking wicked, because it genuinely develops niches and mimics nature. But it's too complex for this problem. So I'll have to code one from scratch, which I'm not good at. I built that GA over months of work, so this new one might be difficult to get right, but I'll try.
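
For the record, my reading suggests the update for the little 4-4-2 net sketched above boils down to something like this - the standard delta rule for sigmoid units, so treat it as a sketch rather than exactly what I'll end up coding:

    // One backprop step (would live inside the TinyNet sketch above).
    // eta is the learning rate - a value I'd have to pick, say 0.5.
    void train(int[] in, int[] target, double eta) {
        double[] hidden = new double[4];
        double[] out = forward(in, hidden);

        // Output deltas: (target - output) * sigmoid gradient, where sigmoid' = out * (1 - out).
        double[] dOut = new double[2];
        for (int o = 0; o < 2; o++)
            dOut[o] = (target[o] - out[o]) * out[o] * (1 - out[o]);

        // Hidden deltas: push the output deltas back through the hidden-to-output weights.
        double[] dHid = new double[4];
        for (int h = 0; h < 4; h++) {
            double err = 0;
            for (int o = 0; o < 2; o++) err += dOut[o] * hidToOut[h][o];
            dHid[h] = err * hidden[h] * (1 - hidden[h]);
        }

        // Nudge every weight along the negative gradient of the error.
        for (int h = 0; h < 4; h++)
            for (int o = 0; o < 2; o++)
                hidToOut[h][o] += eta * dOut[o] * hidden[h];
        for (int i = 0; i < 4; i++)
            for (int h = 0; h < 4; h++)
                inToHid[i][h] += eta * dHid[h] * in[i];
    }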

Let it never be said that I don't work, even if all I've done today was look at the problem, plan out ideas for the solution, and read up on them. Tomorrow is my Doing Day. A couple of hours coding a Java object (with appropriate methods) for each node. They'll have methods like addInput(float), processInputs() and a public float getOutput(). In theory, it'll be able to do anything in this puzzle... It'll also have methods for reading and setting weights and functions. BackProp should be easy with the nodes set up like that, I hope. The GA may be harder - like I said, I'm not sure how I'm doing it yet - but if the worst comes to the worst I can implement my large-scale GA (which I've dubbed SimDarwin) and just use that...
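
Roughly what I have in mind for that node object - a sketch, and the choice of activation function is still up in the air:

    import java.util.ArrayList;
    import java.util.List;

    public class Node {
        private List<Float> inputs = new ArrayList<Float>();
        private List<Float> weights = new ArrayList<Float>();
        private float output;

        // Collect one incoming value for this pass.
        public void addInput(float value) {
            inputs.add(value);
        }

        // Weighted sum of the collected inputs, squashed by a sigmoid
        // (a threshold function could be dropped in here for the output nodes).
        public void processInputs() {
            float sum = 0f;
            for (int i = 0; i < inputs.size(); i++) {
                float w = (i < weights.size()) ? weights.get(i) : 1f;
                sum += inputs.get(i) * w;
            }
            output = (float) (1.0 / (1.0 + Math.exp(-sum)));
            inputs.clear();
        }

        public float getOutput() {
            return output;
        }

        // Weight access, so BackProp or the GA can fiddle with them.
        public List<Float> getWeights() { return weights; }
        public void setWeights(List<Float> newWeights) { weights = newWeights; }
    }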

So yeah - been working hard at this :o)
