<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://old.hacdc.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Omaha</id>
	<title>HacDC Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://old.hacdc.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Omaha"/>
	<link rel="alternate" type="text/html" href="https://old.hacdc.org/index.php/Special:Contributions/Omaha"/>
	<updated>2026-05-07T12:13:08Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.3</generator>
	<entry>
		<id>https://old.hacdc.org/index.php?title=Installing_AVR_Toolchain&amp;diff=4644</id>
		<title>Installing AVR Toolchain</title>
		<link rel="alternate" type="text/html" href="https://old.hacdc.org/index.php?title=Installing_AVR_Toolchain&amp;diff=4644"/>
		<updated>2011-03-06T19:49:10Z</updated>

		<summary type="html">&lt;p&gt;Omaha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
There are a few pieces of software you&#039;ll definitely want for AVR programming:&lt;br /&gt;
* A compiler and/or assembler ([http://www.nongnu.org/avr-libc/ avr-gcc]) to convert human-readable code to binary&lt;br /&gt;
* Manipulation of binaries ([http://www.nongnu.org/avr-libc/ binutils-avr]). You&#039;ll need to convert from the ELF file to something your chip will like.&lt;br /&gt;
* Something to talk to your AVR programmer ([http://www.bsdhome.com/avrdude/ AVRDUDE]); that is, the piece of hardware you plug into both your computer and the chip you want to program.&lt;br /&gt;
* Not required, but something to make your life easier: ([http://www.gnu.org/software/make/ GNU make])&lt;br /&gt;
Note that both avr-gcc and binutils-avr come from the [http://www.nongnu.org/avr-libc/ avr-libc] project. avr-libc itself isn&#039;t software per se; it&#039;s a library that implements standard C functions for AVRs.&lt;br /&gt;
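&lt;br /&gt;
To see how the pieces fit together, here&#039;s roughly what a build looks like by hand (the file names, MCU, programmer type, and port below are just placeholders; adjust them for your setup):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
avr-gcc -mmcu=atmega88 -Os -o main.elf main.c&lt;br /&gt;
avr-objcopy -O ihex -R .eeprom main.elf main.hex&lt;br /&gt;
avrdude -c avr109 -P /dev/ttyUSB0 -b 9600 -p atmega88 -U flash:w:main.hex&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;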
&lt;br /&gt;
= Installation =&lt;br /&gt;
== Windows ==&lt;br /&gt;
[http://winavr.sourceforge.net/ WinAVR] has everything you need.&lt;br /&gt;
&lt;br /&gt;
For the programmer type, select AVR109 or Butterfly.  For the serial port, select the USB device.&lt;br /&gt;
&lt;br /&gt;
== OS X ==&lt;br /&gt;
[http://www.obdev.at/products/crosspack/index.html CrossPack] will take care of you.  It doesn&#039;t require you to have Xcode installed, but if you do, you can do your development in Xcode and run your makefile from that IDE.  If you have a Terminal.app session open when you install it, you&#039;ll need to reload your .profile to use CrossPack.&lt;br /&gt;
&lt;br /&gt;
When you install CrossPack, you&#039;ll be presented with documentation in your web browser.  These docs are also located at /Applications/Crosspack-AVR-Manual.html.  This is important, as the CrossPack docs are not on the www.obdev.at site :\&lt;br /&gt;
&lt;br /&gt;
=== Making Crosspack projects work with Elliot&#039;s boards ===&lt;br /&gt;
&lt;br /&gt;
Follow the CrossPack &#039;getting started&#039; section to create your first hello world project.&lt;br /&gt;
&lt;br /&gt;
First, make a demo project.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
bash$ cd ~/Documents&lt;br /&gt;
bash$ mkdir AVR&lt;br /&gt;
bash$ cd AVR&lt;br /&gt;
bash$ avr-project Demo&lt;br /&gt;
bash$ open Demo &lt;br /&gt;
bash$ cd Demo&lt;br /&gt;
bash$ ls -l&lt;br /&gt;
drwxr-xr-x   5 cs  cs  170 Nov 19 13:58 Demo.xcodeproj&lt;br /&gt;
drwxr-xr-x   4 cs  cs  136 Nov 19 13:58 firmware&lt;br /&gt;
bash$ cd firmware&lt;br /&gt;
bash$ ls -l&lt;br /&gt;
-rw-r--r--   1 cs  cs  4139 Nov 19 13:58 Makefile&lt;br /&gt;
-rw-r--r--   1 cs  cs   348 Nov 19 13:58 main.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, your code lives in the project&#039;s firmware folder.  You can replace the code (*.c) with whatever blinkenlights project you see fit.  You&#039;ll want to open up the Makefile and edit two lines: the DEVICE and PROGRAMMER lines.  The device we are using is the &amp;quot;atmega88&amp;quot;.  The programmer needs to be set to avr109, the baud rate to 9600, and the port to whatever your /dev/tty.usbserial device (read: FTDI cable) is called.  Mine shows up as /dev/tty.usbserial-FTEA4CYY; yours may very well show up with a different name.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
DEVICE     = atmega88&lt;br /&gt;
CLOCK      = 8000000&lt;br /&gt;
PROGRAMMER = -c avr109 -P /dev/tty.usbserial-FTEA4CYY -b 9600&lt;br /&gt;
OBJECTS    = main.o&lt;br /&gt;
FUSES      = -U hfuse:w:0xd9:m -U lfuse:w:0x24:m&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you&#039;ve edited your Makefile, you can run the following commands:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
make&lt;br /&gt;
make flash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make will compile the C code into object code and then into the correct HEX file for the device.&lt;br /&gt;
Make flash will try to program the chip.  Make sure you&#039;ve held down reset and button A in order to let the device reset into programming mode!&lt;br /&gt;
Grab Elliot&#039;s blinking LED code and try it out!&lt;br /&gt;
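&lt;br /&gt;
For the curious: the flash target just wraps avrdude with the variables you set above.  With the example settings, the command it runs amounts to something like the following (exact flags may differ slightly in your CrossPack Makefile):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
avrdude -c avr109 -P /dev/tty.usbserial-FTEA4CYY -b 9600 -p atmega88 -U flash:w:main.hex:i&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;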
&lt;br /&gt;
-Will&lt;br /&gt;
&lt;br /&gt;
== Linux ==&lt;br /&gt;
&lt;br /&gt;
=== Ubuntu ===&lt;br /&gt;
Install the following packages:&lt;br /&gt;
* avrdude&lt;br /&gt;
* avrdude-doc&lt;br /&gt;
* binutils-avr&lt;br /&gt;
* avr-libc&lt;br /&gt;
* gcc-avr&lt;br /&gt;
&lt;br /&gt;
You can get them in one shot using:&lt;br /&gt;
 sudo aptitude install avrdude avrdude-doc binutils-avr avr-libc gcc-avr&lt;br /&gt;
&lt;br /&gt;
=== Gentoo ===&lt;br /&gt;
Install the following packages:&lt;br /&gt;
* dev-embedded/avrdude&lt;br /&gt;
* sys-devel/crossdev&lt;br /&gt;
&lt;br /&gt;
Run (as root):&lt;br /&gt;
 crossdev -t avr&lt;br /&gt;
This will install cross-avr/gcc, cross-avr/binutils, and cross-avr/avr-libc (pulled from an avr portage overlay).&lt;br /&gt;
&lt;br /&gt;
Finally, the following command is necessary to make the linker happy (again, as root):&lt;br /&gt;
 ln -s /usr/lib/binutils/avr/2.21/ldscripts /usr/x86_64-pc-linux-gnu/avr/binutils-bin/2.21/ldscripts&lt;br /&gt;
You&#039;ll want to adjust the path above to match your architecture and binutils version.&lt;br /&gt;
&lt;br /&gt;
====Atmel Dragon with avrdude on Ubuntu  (This may be outdated?  Feel free to ignore.) ====&lt;br /&gt;
Apparently there are two bugs that get in the way when trying to use avrdude with the dragon.&lt;br /&gt;
&lt;br /&gt;
* avrdude 5.8 (via apt-get) segfaults after writing 1 byte: http://savannah.nongnu.org/bugs/?27507 - there is a patch for 5.8 posted there&lt;br /&gt;
* avrdude 5.9 (via the official site) source apparently has some other bug that prevents the build from completing&lt;br /&gt;
&lt;br /&gt;
First, get the dependencies for building the code. &lt;br /&gt;
 sudo apt-get build-dep avrdude&lt;br /&gt;
&lt;br /&gt;
The solution (aside from applying patches to the above versions) is to use the patched 5.10 SVN code. The instructions are from this link: http://www.avrfreaks.net/index.php?name=PNphpBB2&amp;amp;file=printview&amp;amp;t=87972&amp;amp;start=20&lt;br /&gt;
* svn co svn://svn.savannah.nongnu.org/avrdude/trunk .&lt;br /&gt;
* cd avrdude&lt;br /&gt;
* ./bootstrap&lt;br /&gt;
* ./configure&lt;br /&gt;
* make&lt;br /&gt;
* sudo make install&lt;br /&gt;
&lt;br /&gt;
That seems to have worked for me! I&#039;m on 9.04 32bit and I also installed bison/flex/autoconf --obscurite&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Microcontrollers]]&lt;/div&gt;</summary>
		<author><name>Omaha</name></author>
	</entry>
	<entry>
		<id>https://old.hacdc.org/index.php?title=Disassembly&amp;diff=3511</id>
		<title>Disassembly</title>
		<link rel="alternate" type="text/html" href="https://old.hacdc.org/index.php?title=Disassembly&amp;diff=3511"/>
		<updated>2010-06-23T01:33:51Z</updated>

		<summary type="html">&lt;p&gt;Omaha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Some links to get us started:&lt;br /&gt;
&lt;br /&gt;
[[wikipedia:Bit|Bit]]&lt;br /&gt;
&lt;br /&gt;
[[wikipedia:Byte|Byte]]&lt;br /&gt;
&lt;br /&gt;
[[wikipedia:Pointer_(computing)|Pointer]]&lt;br /&gt;
&lt;br /&gt;
[[wikipedia:MOS_Technology_6502|6502]]&lt;br /&gt;
&lt;br /&gt;
[http://e-tradition.net/bytes/6502/6502_instruction_set.html 6502 instruction set]&lt;br /&gt;
&lt;br /&gt;
[http://www.atariarchives.org/ Atari 8-bit documentation online]&lt;br /&gt;
&lt;br /&gt;
[http://www.atariarchives.org/mapping/memorymap.php Atari 8-bit memory map]&lt;br /&gt;
&lt;br /&gt;
[http://atari800.sourceforge.net/ Atari 800 emulator]&lt;br /&gt;
&lt;br /&gt;
[http://www.virtualdub.org/altirra.html Altirra - an Atari 800 emulator for Windows]&lt;br /&gt;
&lt;br /&gt;
[http://thepiratebay.org/torrent/5576778 A ton of Atari 8-bit documentation]&lt;br /&gt;
&lt;br /&gt;
[http://sourceforge.net/projects/atari800/files/ROM/Original%20XL%20ROM/xf25.zip/download Atari 8-bit ROMS]&lt;br /&gt;
&lt;br /&gt;
[http://www.atarimania.com/game-atari-400-800-xl-xe-space-invaders_4831.html Space Invaders]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Getting Started&lt;br /&gt;
&lt;br /&gt;
prerequisites&lt;br /&gt;
    knowledge&lt;br /&gt;
    tools&lt;br /&gt;
        emulator&lt;br /&gt;
        cc65&lt;br /&gt;
        roms&lt;br /&gt;
        documentation of the atari 800&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Toolchain&lt;br /&gt;
&lt;br /&gt;
    get and install cc65, etc.&lt;br /&gt;
&lt;br /&gt;
    * description of the various tools included in the package&lt;br /&gt;
&lt;br /&gt;
Set up your info file:&lt;br /&gt;
    Jason started with the example info file from the cc65 website&lt;br /&gt;
    http://www.cc65.org/doc/da65-4.html#ss4.7&lt;br /&gt;
&lt;br /&gt;
    ** notes about setting up your info file&lt;br /&gt;
        tell the DA what is code, what is a bytetable, what is a list of addresses&lt;br /&gt;
        tell the DA about the labels you are aware of&lt;br /&gt;
&lt;br /&gt;
        iterative process of creating this info file as you learn more about the program&lt;br /&gt;
&lt;br /&gt;
        eventually naming functions, etc.&lt;br /&gt;
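&lt;br /&gt;
As a sketch, a minimal da65 info file might look like this (the addresses, file names, and ranges here are made up; check the da65 docs linked above for the real syntax details):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
GLOBAL {&lt;br /&gt;
    INPUTNAME  &amp;quot;game.rom&amp;quot;;&lt;br /&gt;
    OUTPUTNAME &amp;quot;game.s&amp;quot;;&lt;br /&gt;
    STARTADDR  $A000;&lt;br /&gt;
};&lt;br /&gt;
RANGE { START $A000; END $A7FF; TYPE Code;      };&lt;br /&gt;
RANGE { START $A800; END $A8FF; TYPE ByteTable; };&lt;br /&gt;
RANGE { START $A900; END $A91F; TYPE AddrTable; };&lt;br /&gt;
LABEL { NAME &amp;quot;reset&amp;quot;; ADDR $A000; };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;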
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
disassemble your ROM&lt;br /&gt;
    what are the command line params&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
re-assemble your ROM &lt;br /&gt;
    first time, this should be from the base output from the disassembly&lt;br /&gt;
    subsequent assemblies will include changes that you made to the ASM in your quest for information&lt;br /&gt;
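&lt;br /&gt;
One possible shape of the cycle (file names hypothetical; ld65 will usually need a memory config, e.g. a -C file matching your ROM&#039;s load address, rather than the bare -t none shown here):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
da65 -i game.info -o game.s game.rom&lt;br /&gt;
ca65 -o game.o game.s&lt;br /&gt;
ld65 -t none -o game.bin game.o&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;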
&lt;br /&gt;
ITERATE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://www.cc65.org 6502 cross-compiler and tools]&lt;br /&gt;
&lt;br /&gt;
[http://www.atarimania.com/pgemainsoft.awp?type=G&amp;amp;system=8 Atarimania 8-bit games]&lt;/div&gt;</summary>
		<author><name>Omaha</name></author>
	</entry>
	<entry>
		<id>https://old.hacdc.org/index.php?title=User:Omaha&amp;diff=3466</id>
		<title>User:Omaha</title>
		<link rel="alternate" type="text/html" href="https://old.hacdc.org/index.php?title=User:Omaha&amp;diff=3466"/>
		<updated>2010-06-18T00:18:11Z</updated>

		<summary type="html">&lt;p&gt;Omaha: Created page with &amp;#039;Alias: omaha  Real Name: Alan McCosh  Email: omaha - at - whilesoftware - dot - com  IRC: omaha on #hacdc on freenode  URL: http://whilesoftware.com  Category:Members  == Pro…&amp;#039;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Alias: omaha&lt;br /&gt;
&lt;br /&gt;
Real Name: Alan McCosh&lt;br /&gt;
&lt;br /&gt;
Email: omaha - at - whilesoftware - dot - com&lt;br /&gt;
&lt;br /&gt;
IRC: omaha on #hacdc on freenode&lt;br /&gt;
&lt;br /&gt;
URL: http://whilesoftware.com&lt;br /&gt;
&lt;br /&gt;
[[Category:Members]]&lt;br /&gt;
&lt;br /&gt;
== Projects ==&lt;br /&gt;
&lt;br /&gt;
* Insert Title - an interactive, visual, game-ish environment for developing and competing AIs&lt;/div&gt;</summary>
		<author><name>Omaha</name></author>
	</entry>
	<entry>
		<id>https://old.hacdc.org/index.php?title=NARG&amp;diff=3465</id>
		<title>NARG</title>
		<link rel="alternate" type="text/html" href="https://old.hacdc.org/index.php?title=NARG&amp;diff=3465"/>
		<updated>2010-06-18T00:12:41Z</updated>

		<summary type="html">&lt;p&gt;Omaha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the HacDC Natural Language Processing and Artificial Intelligence Group (NARG)&lt;br /&gt;
&lt;br /&gt;
[[Category:Ongoing_Projects]]&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
&lt;br /&gt;
The mission of NARG is to bring HacDC community members that are interested in NLP and AI together for research, projects, and knowledge sharing. Supporting members in getting projects done is the primary goal. Contact [[User:Obscurite]] for more info.&lt;br /&gt;
&lt;br /&gt;
==Reference and Resources==&lt;br /&gt;
&lt;br /&gt;
Add links to AI/NLP reference material, courseware, etc.&lt;br /&gt;
&lt;br /&gt;
* [http://see.stanford.edu/SEE/lecturelist.aspx?coll=63480b48-8819-4efd-8412-263f1a472f5a Stanford NLP course]&lt;br /&gt;
* [http://spiderland.org/breve/documentation.php Spiderland / Breve docs]&lt;br /&gt;
* [http://csclub.uwaterloo.ca/contest/ Google AI contest]&lt;br /&gt;
* Python Software, Techniques, etc.&lt;br /&gt;
** [http://www.nltk.org/book Natural Language Processing with Python (w/ NLTK)]&lt;br /&gt;
** [http://www.nltk.org NLTK]&lt;br /&gt;
** [http://www.ibm.com/developerworks/linux/library/l-python-mechanize-beautiful-soup/index.html IBM article on web spidering/scraping]&lt;br /&gt;
** [http://simpy.sourceforge.net/ Python discrete simulation library, SimPy]&lt;br /&gt;
** [http://heather.cs.ucdavis.edu/~matloff/simcourse.html Online Book on SimPy &amp;amp; Simulation]&lt;br /&gt;
** [http://code.google.com/edu/languages/google-python-class/ Google&#039;s Online Python Class Materials]&lt;br /&gt;
* Brad&#039;s AI Talks&lt;br /&gt;
** [[Media:Ai_hacdc1.pdf]] - History of AI and Braitenberg Vehicles.&lt;br /&gt;
** [[Media:Ai_hacdc2.pdf]] - Subsumption architectures.&lt;br /&gt;
** [[Media:backprop.pdf]] - Awesome book chapter on backprop.&lt;br /&gt;
** [[Media:Ai_hacdc3.pdf]] - Intro to Neural Networks.&lt;br /&gt;
** [[Media:Ai_hacdc4.pdf]] - Dimensions Distance and Optimization.&lt;br /&gt;
** [[Media:Pso.tgz]] - Particle Swarm Optimizer in Lua (4.0)&lt;br /&gt;
** [[Media:Ai_hacdc5.pdf]] - Stochastic Search and Neural Networks.&lt;br /&gt;
** [[Media:Ai_hacdc6.pdf]] - Genetic Algorithms.&lt;br /&gt;
** [[Media:Ai_hacdc7.pdf]] - Genetic Programming workshop, part 1.&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
&lt;br /&gt;
Some profiles of our members and what they&#039;re into:&lt;br /&gt;
&lt;br /&gt;
* [[User:Obscurite]] (Daniel Packer) - Interested in emotional interfaces, responsive human interfaces, brain and bio signals, intelligent metadata, and cyborg tech.&lt;br /&gt;
&lt;br /&gt;
* Philip Stewart - Primarily interested in figurative language comprehension, semantics, and digital poetics. Secondarily, event-related potential (ERP) studies, consciousness, and applying scientific findings to philosophical &amp;quot;problematics&amp;quot; in novel ways. Coursework in psycholinguistics, physiological psychology, pharmacology, and functional neuroanatomy.&lt;br /&gt;
&lt;br /&gt;
* Bradford Barr ([[User:bbarr]])&lt;br /&gt;
&lt;br /&gt;
* Darius Roberts - Interested in health, but if there was a way to make a white-label vark.com that would be my first choice of projects.&lt;br /&gt;
&lt;br /&gt;
* Todd Fine - Interested in analyzing the stream of meaning from humans on the internet -- twitter is especially curious. I am a bit obsessed with text-to-speech integrated into ambient soundscapes. Have flirted with various machine learning and ai algorithms, but always need to refresh. I am also interested in simple game AI and strategy. Also like computer word games and computer-generated theater/poetry. Have used nltk and would like to learn more.&lt;br /&gt;
&lt;br /&gt;
* Al Haraka&lt;br /&gt;
&lt;br /&gt;
* [[User:Oberoc]] (Tino Dai)&lt;br /&gt;
&lt;br /&gt;
* Phil Kimmey: Interest in AI, with a primary interest in learning more about non-deterministic approaches and applications, which hopefully will lead to an interest in NLP as well.&lt;br /&gt;
&lt;br /&gt;
* Mike Daren ([[User:Mdaren]]) - Most experience in discrete event-based simulations.&lt;br /&gt;
&lt;br /&gt;
* Michael&lt;br /&gt;
&lt;br /&gt;
* Alan ([[User:omaha]]) - interested in AI for games&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
&lt;br /&gt;
NARG meets on Thursdays at HacDC from 7-9pm. The AI and NLP focus alternates each week to give folks 2 weeks to digest the previous meeting&#039;s content/projects.&lt;br /&gt;
&lt;br /&gt;
Other events and cancelations will be announced via the mailing list. Check it out! [http://www.hacdc.org/mailman/listinfo/narg NARG mailman page]&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for June 17 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Brad, Daniel, Todd, Sean and Jason.&lt;br /&gt;
&lt;br /&gt;
Brad gave his presentation on PushGP, implementing a parser for a limited PushGP-based system, and implementing an evolutionary loop with this parser.&lt;br /&gt;
&lt;br /&gt;
Alan posted his Lua code featuring a working implementation of the evolutionary loop here [[Genetic_programming_example_in_lua]]&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for May 28 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Brad, Daniel, Darius, Todd, Mike D, Tino, Alan, Dirk, and one other person who found us via google.&lt;br /&gt;
&lt;br /&gt;
Brad presented on GP and continued his workshop. Folks built interpreters and generators for random programs, and those who&#039;d already done that began working on the evolutionary functionality. Next week the workshop will continue.&lt;br /&gt;
&lt;br /&gt;
Daniel posted his python code from part I of Brad&#039;s GP workshop here [[NARG_GP_stacks_code]]&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for May 13 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Brad, Darius, Alan, Daniel, Todd, Mike D, and Tom C.&lt;br /&gt;
&lt;br /&gt;
Todd gave a talk about using genetic algorithms (based on a paper by Karl Sims) to create aesthetically pleasing 2D graphical images and textures. He deployed Pyevolve and the Python Imaging Library. The slides are available here: http://prezi.com/enn87uvfm-jc/artificial-evolution-for-computer-graphics/&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for May 6 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Brad, Nikolas, Daniel, Alan, Todd, Phil S, Mike D, and Bjorn&lt;br /&gt;
&lt;br /&gt;
Brad gave a talk on Genetic Algorithms and showed off a Lua implementation. Slides will be posted.&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Apr 29 2010 ===&lt;br /&gt;
&lt;br /&gt;
In a brutal battle between gorgeous weather and HacDC NARG on Thursday, NARG suffered a humiliating defeat when gorgeous weather was presumably the cause of the lowest NARG attendance ever witnessed. Reportedly, Daniel Packer arrived on the scene at 7pm to find an empty house and by 7:30 had eaten all dozen mini muffins that he&#039;d intended to pawn off on NARG attendees. &amp;quot;We&#039;ll get you next time, gorgeous weather,&amp;quot; Daniel was quoted as muttering as he wandered off into the sunshine, full of mini muffin.&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Apr 22 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Daniel, Brad, Alan, Dirk (new to NARG), Melissa (new to NARG), Phil S., Tino, Mike D.&lt;br /&gt;
&lt;br /&gt;
Brad presented more on Particle Swarm Optimization and started on Neural Nets (Perceptrons).&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Apr 15 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Daniel, Mike, and Todd F&lt;br /&gt;
&lt;br /&gt;
We discussed a web spidering project and started looking into a python project using mechanize, beautiful soup, and NLTK. Todd suggested looking at presidents and we turned to Wikipedia for a source of content. &lt;br /&gt;
&lt;br /&gt;
Daniel and Mike hacked on spidering while Todd did NLTK research and set up a git repo (that we are still figuring out). This code downloads president Wikipedia entries, pickles and saves them, then cleans and saves them. The next step is to tokenize and process in NLTK. It will be put into a git repo (when Todd gets time). When you run it the first time, it will download and serialize the data from Wikipedia. (Please check Wikipedia&#039;s terms and conditions, license, EULA, etc. before running.) &lt;br /&gt;
&lt;br /&gt;
* [[File:narg_pypres.tgz]]&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Apr 8 2010 ===&lt;br /&gt;
&lt;br /&gt;
Brad did a great presentation on Swarms and optimization problems, non-euclidean spaces...&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Apr 1 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Daniel, Darius, Todd, and Brad,&lt;br /&gt;
&lt;br /&gt;
We attempted to get Darius&#039;s Rovio robot going, but had networking issues. Todd did an overview of the K-means clustering algorithm and clustering in general, using the Collective Intelligence book (listed in resources) as a reference. Brad gave some insights into generalization of the Euclidean distance calculations from a math perspective - there are different distance equations for clustering, and he mentioned that at NASA, Manhattan distance was very useful for artificial vision. We brainstormed on ways to use clustering for social networks and other web databases. We also discussed potential hadoop/map reduce projects using pycloud or other cloud processing services. The meeting closed with burritos, fried tacos, and a bit of late night hacking.&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Mar 18 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Daniel, Brad, Mike, and New-Mike.&lt;br /&gt;
&lt;br /&gt;
We had a general discussion about many things.&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Mar 11 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Darius, Daniel, Brad, Phil Stewart, Mike, and A.J.&lt;br /&gt;
&lt;br /&gt;
Brad presented on Subsumption architectures. He will attach slides for this and the previous presentation. We watched a Breve demo of Brad&#039;s subsumption implementation (a very abstracted version equivalent to nested ifs), and he did some live coding which was fun. &lt;br /&gt;
&lt;br /&gt;
Brad suggested a long term contest idea analogous to Hackerspaces in Space, maybe using pygame. We discussed various ideas that would make fun competitions. &lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Mar 4 2010 ===&lt;br /&gt;
&lt;br /&gt;
Second hand minutes about meeting from Daniel (did not attend due to sched. conflict):&lt;br /&gt;
* NLTK intro from Todd Fine (first few chapters of NLTK book - see resources section for link)&lt;br /&gt;
* Discussion of approaches to AI vs NLP in group (AI more game/sim oriented NLP more machine learning oriented i.e. bayesian)&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Feb 25 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Nikolas, Todd, Phil, Michael, Daniel, Brad, and Darius. &lt;br /&gt;
&lt;br /&gt;
* We agreed to alternate AI and NLP topics every other week to give people more time to digest material and lighten the burden of presenters/teachers&lt;br /&gt;
* Daniel will present on NLP/NLTK next meeting&lt;br /&gt;
&lt;br /&gt;
Brad did a great demo of several Breve simulations including the capture the flag simulation he ported to python from a class he&#039;d taken. We looked at simulations of Braitenberg machines moving towards or away from stimulation sources. We analyzed the two existing CTF bots and looked at the code that defines them, and asked Brad a lot of questions about what the bots could do in code (there are a lot of specifics!) We&#039;re supposed to install Breve for the next AI focus meeting and start poking at the code. &lt;br /&gt;
&lt;br /&gt;
During Brad&#039;s presentation, at the point where he briefly covered AI history, there was a fascinating conversation between Brad, Nikolas, and Todd about ways to define and contrast machine learning and AI. In the end it seemed the consensus was that machine learning is a rigorous academic field with a focus on mathematics and numerical analysis, whereas AI is more general and has a more philosophical bent. Brad said that in his school days, the machine learning profs would make a point to say they weren&#039;t in &amp;quot;AI&amp;quot;. Nikolas posited that it might be due to the stigma AI received from its failures to achieve the rapid results it promised early on, and that seemed logical.&lt;br /&gt;
&lt;br /&gt;
The code for CTF has been put up on [http://github.com/jdar/ctf github].&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Feb 20 2010 ===&lt;br /&gt;
&lt;br /&gt;
The first NARG meeting was held on Feb 20, 2010 at Sticky Fingers Bakery. In attendance were Brad, Darius, Phil (not Stewart - a HacDC newcomer), and Daniel. The conversation was relatively free form but a few suggestions were favored:&lt;br /&gt;
&lt;br /&gt;
* Meetings will be ongoing at HacDC on Thursday evenings at 7pm, realizing that due to the high frequency of meetings, some folks will miss some meetings.&lt;br /&gt;
* Brad will put together a demo/tutorial using the spiderland.org breve environment on Braitenberg Vehicles as an entry point into AI learning. We will collectively try to use this environment (virtual 3d world with actuators and sensors for 3d movement and input) and graduate to Subsumption Architectures and neural nets.  We&#039;ll use python since most people are willing to use it and have at least played with it, though Brad personally prefers Steve (spelling?? - some unholy combo of smalltalk and javascript?) (correct this info)&lt;br /&gt;
** http://spiderland.org&lt;br /&gt;
** http://en.wikipedia.org/wiki/Braitenberg_vehicles&lt;br /&gt;
** http://en.wikipedia.org/wiki/Subsumption_architecture&lt;br /&gt;
* Daniel will put together a demo/tutorial based on NLTK and the book &amp;quot;Natural Language Processing with Python&amp;quot;, which he has a copy of for reference.&lt;br /&gt;
* We will eventually choose a robotics platform for physical AI, either a repurposed roomba type solution (favored by Phil) or an open avr/arduino/ucontroller based bot like: http://www.adafruit.com/blog/2009/04/20/arduino-powered-braitenberg-vehicle-light-seeking-robot/&lt;br /&gt;
&lt;br /&gt;
Other topics:&lt;br /&gt;
* Brad, Todd, Darius and Daniel have downloaded the google AI tron code - Brad and Todd have working custom code and we will keep an eye out for good show and tell opportunities. Brad&#039;s solution is a neural net based one.&lt;br /&gt;
* Daniel brought up the idea of machine readable codification of human ideas/statements and the political ramifications after Phil mentioned .gov open data and how it&#039;s not well formatted for real time use. Brad mentioned the language http://www.lojban.org/tiki/Lojban - which attempts to remove ambiguity.&lt;br /&gt;
* Daniel is interested in using AI for bio signals interpretation and NLP for emotionally contextual interfaces/digital ghosts. Darius is interested in using NLP for matching content with expertise, like http://vark.com which got acquired by google a week or so ago. Brad is interested in AI as a practitioner (it&#039;s his job) and wants to do some virtual 3d simulations. Phil is open to pretty much anything (he&#039;s too young to know better).&lt;br /&gt;
* Brad suggested there were ways to bridge AI and NLP. The idea of bridging NLP and AI via the use of agent based AI that use NLP based communication models in evolutionary scenarios was brought up by Daniel and it generally convinced everyone there were some exciting potential bridges between the two disciplines.&lt;/div&gt;</summary>
		<author><name>Omaha</name></author>
	</entry>
	<entry>
		<id>https://old.hacdc.org/index.php?title=NARG&amp;diff=3464</id>
		<title>NARG</title>
		<link rel="alternate" type="text/html" href="https://old.hacdc.org/index.php?title=NARG&amp;diff=3464"/>
		<updated>2010-06-18T00:10:48Z</updated>

		<summary type="html">&lt;p&gt;Omaha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the HacDC Natural Language Processing and Artificial Intelligence Group (NARG)&lt;br /&gt;
&lt;br /&gt;
[[Category:Ongoing_Projects]]&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
&lt;br /&gt;
The mission of NARG is to bring HacDC community members that are interested in NLP and AI together for research, projects, and knowledge sharing. Supporting members in getting projects done is the primary goal. Contact [[User:Obscurite]] for more info.&lt;br /&gt;
&lt;br /&gt;
==Reference and Resources==&lt;br /&gt;
&lt;br /&gt;
Add links to AI/NLP reference material, courseware, etc.&lt;br /&gt;
&lt;br /&gt;
* [http://see.stanford.edu/SEE/lecturelist.aspx?coll=63480b48-8819-4efd-8412-263f1a472f5a Stanford NLP course]&lt;br /&gt;
* [http://spiderland.org/breve/documentation.php Spiderland / Breve docs]&lt;br /&gt;
* [http://csclub.uwaterloo.ca/contest/ Google AI contest]&lt;br /&gt;
* Python Software, Techniques, etc.&lt;br /&gt;
** [http://www.nltk.org/book Natural Language Processing with Python (w/ NLTK)]&lt;br /&gt;
** [http://www.nltk.org NLTK]&lt;br /&gt;
** [http://www.ibm.com/developerworks/linux/library/l-python-mechanize-beautiful-soup/index.html IBM article on web spidering/scraping]&lt;br /&gt;
** [http://simpy.sourceforge.net/ Python discrete simulation library, SimPy]&lt;br /&gt;
** [http://heather.cs.ucdavis.edu/~matloff/simcourse.html Online Book on SimPy &amp;amp; Simulation]&lt;br /&gt;
** [http://code.google.com/edu/languages/google-python-class/ Google&#039;s Online Python Class Materials]&lt;br /&gt;
* Brad&#039;s AI Talks&lt;br /&gt;
** [[Media:Ai_hacdc1.pdf]] - History of AI and Braitenberg Vehicles.&lt;br /&gt;
** [[Media:Ai_hacdc2.pdf]] - Subsumption architectures.&lt;br /&gt;
** [[Media:backprop.pdf]] - Awesome book chapter on backprop.&lt;br /&gt;
** [[Media:Ai_hacdc3.pdf]] - Intro to Neural Networks.&lt;br /&gt;
** [[Media:Ai_hacdc4.pdf]] - Dimensions Distance and Optimization.&lt;br /&gt;
** [[Media:Pso.tgz]] - Particle Swarm Optimizer in Lua (4.0)&lt;br /&gt;
** [[Media:Ai_hacdc5.pdf]] - Stochastic Search and Neural Networks.&lt;br /&gt;
** [[Media:Ai_hacdc6.pdf]] - Genetic Algorithms.&lt;br /&gt;
** [[Media:Ai_hacdc7.pdf]] - Genetic Programming workshop, part 1.&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
&lt;br /&gt;
Some profiles of our members and what they&#039;re into:&lt;br /&gt;
&lt;br /&gt;
* [[User:Obscurite]] (Daniel Packer) - Interested in emotional interfaces, responsive human interfaces, brain and bio signals, intelligent metadata, and cyborg tech.&lt;br /&gt;
&lt;br /&gt;
* Philip Stewart - Primarily interested in figurative language comprehension, semantics, and digital poetics. Secondarily, event-related potential (ERP) studies, consciousness, and applying scientific findings to philosophical &amp;quot;problematics&amp;quot; in novel ways. Coursework in psycholinguistics, physiological psychology, pharmacology, and functional neuroanatomy.&lt;br /&gt;
&lt;br /&gt;
* Bradford Barr ([[User:bbarr]])&lt;br /&gt;
&lt;br /&gt;
* Darius Roberts - Interested in health, but if there was a way to make a white-label vark.com that would be my first choice of projects.&lt;br /&gt;
&lt;br /&gt;
* Todd Fine - Interested in analyzing the stream of meaning from humans on the internet -- twitter is especially curious. I am a bit obsessed with text-to-speech integrated into ambient soundscapes. Have flirted with various machine learning and ai algorithms, but always need to refresh. I am also interested in simple game AI and strategy. Also like computer word games and computer-generated theater/poetry. Have used nltk and would like to learn more.&lt;br /&gt;
&lt;br /&gt;
* Al Haraka&lt;br /&gt;
&lt;br /&gt;
* [[User:Oberoc]] (Tino Dai)&lt;br /&gt;
&lt;br /&gt;
* Phil Kimmey: Interest in AI, with a primary interest in learning more about non-deterministic approaches and applications, which hopefully will lead to an interest in NLP as well.&lt;br /&gt;
&lt;br /&gt;
* Mike Daren ([[User:Mdaren]]) - Most experience in discrete event-based simulations.&lt;br /&gt;
&lt;br /&gt;
* Michael&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
&lt;br /&gt;
NARG meets on Thursdays at HacDC from 7-9pm. The AI and NLP focuses alternate every week, giving folks two weeks to digest the previous meeting&#039;s content/projects.&lt;br /&gt;
&lt;br /&gt;
Other events and cancellations will be announced via the mailing list. Check it out! [http://www.hacdc.org/mailman/listinfo/narg NARG mailman page]&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for June 17 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Brad, Daniel, Todd, Sean and Jason.&lt;br /&gt;
&lt;br /&gt;
Brad gave his presentation on PushGP, implementing a parser for a limited PushGP-based system, and implementing an evolutionary loop with this parser.&lt;br /&gt;
&lt;br /&gt;
Alan posted his Lua code featuring a working implementation of the evolutionary loop here: [[Genetic_programming_example_in_lua]].&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for May 28 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Brad, Daniel, Darius, Todd, Mike D, Tino, Alan, Dirk, and one other person who found us via Google.&lt;br /&gt;
&lt;br /&gt;
Brad presented on GP and continued his workshop. Folks built interpreters and generators for random programs, and those who&#039;d already done so began working on the evolutionary functionality. The workshop will continue next week.&lt;br /&gt;
&lt;br /&gt;
Daniel posted his Python code from part I of Brad&#039;s GP workshop here: [[NARG_GP_stacks_code]].&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for May 13 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Brad, Darius, Alan, Daniel, Todd, Mike D, and Tom C.&lt;br /&gt;
&lt;br /&gt;
Todd gave a talk about using genetic algorithms (based on a paper by Karl Sims) to create aesthetically pleasing 2D graphical images and textures. He deployed Pyevolve and the Python Imaging Library. The slides are available here: http://prezi.com/enn87uvfm-jc/artificial-evolution-for-computer-graphics/&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for May 6 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Brad, Nikolas, Daniel, Alan, Todd, Phil S, Mike D, and Bjorn.&lt;br /&gt;
&lt;br /&gt;
Brad gave a talk on Genetic Algorithms and showed off a Lua implementation. Slides will be posted.&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Apr 29 2010 ===&lt;br /&gt;
&lt;br /&gt;
In a brutal battle between gorgeous weather and HacDC NARG on Thursday, NARG suffered a humiliating defeat when gorgeous weather was presumably the cause of the lowest NARG attendance ever witnessed. Reportedly, Daniel Packer arrived on the scene at 7pm to find an empty house and by 7:30 had eaten all dozen mini muffins that he&#039;d intended to pawn off on NARG attendees. &amp;quot;We&#039;ll get you next time, gorgeous weather,&amp;quot; Daniel was quoted as muttering as he wandered off into the sunshine, full of mini muffin.&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Apr 22 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Daniel, Brad, Alan, Dirk (new to NARG), Melissa (new to NARG), Phil S., Tino, and Mike D.&lt;br /&gt;
&lt;br /&gt;
Brad presented more on Particle Swarm Optimization and started on Neural Nets (Perceptrons).&lt;br /&gt;
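The perceptron Brad introduced can be sketched in a few lines of Lua. This is a hypothetical illustration (the OR-gate training data, learning rate, and function names are my own), not code from the meeting:

```lua
-- Minimal perceptron sketch: learn the OR function with the classic
-- update rule w = w + lr * (target - output) * input. Hypothetical example.
function perceptron_train(samples, lr, epochs)
	local w = {0, 0}
	local b = 0
	for epoch = 1, epochs do
		for _, s in ipairs(samples) do
			local sum = b + w[1] * s.x[1] + w[2] * s.x[2]
			local out = (sum >= 0) and 1 or 0      -- threshold activation
			local err = s.target - out
			w[1] = w[1] + lr * err * s.x[1]
			w[2] = w[2] + lr * err * s.x[2]
			b = b + lr * err
		end
	end
	return w, b
end

-- training data for a two-input OR gate
samples = {
	{x = {0, 0}, target = 0},
	{x = {0, 1}, target = 1},
	{x = {1, 0}, target = 1},
	{x = {1, 1}, target = 1},
}
w, b = perceptron_train(samples, 0.1, 20)
```

Since OR is linearly separable, the update rule converges to a correct separating line within a handful of epochs.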
&lt;br /&gt;
=== Meeting minutes for Apr 15 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Daniel, Mike, and Todd F&lt;br /&gt;
&lt;br /&gt;
We discussed a web-spidering project and started looking into a Python project using mechanize, Beautiful Soup, and NLTK. Todd suggested looking at presidents, and we turned to Wikipedia for a source of content.&lt;br /&gt;
&lt;br /&gt;
Daniel and Mike hacked on spidering while Todd did NLTK research and set up a git repo (that we are still figuring out). This code downloads the presidents&#039; Wikipedia entries, pickles and saves them, then cleans and saves them. The next step is to tokenize and process them in NLTK. The code will be put into a git repo (when Todd gets time). The first time you run it, it will download and serialize the data from Wikipedia. (Please check Wikipedia&#039;s terms and conditions, license, EULA, etc. before running.)&lt;br /&gt;
&lt;br /&gt;
* [[File:narg_pypres.tgz]]&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Apr 8 2010 ===&lt;br /&gt;
&lt;br /&gt;
Brad gave a great presentation on swarms, optimization problems, non-Euclidean spaces...&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Apr 1 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Daniel, Darius, Todd, and Brad.&lt;br /&gt;
&lt;br /&gt;
We attempted to get Darius&#039;s Rovio robot going, but had networking issues. Todd gave an overview of the k-means clustering algorithm and clustering in general, using the Collective Intelligence book (listed in resources) as a reference. Brad gave some insights into generalizations of the Euclidean distance calculation from a math perspective: there are different distance equations for clustering, and he mentioned that at NASA, Manhattan distance was very useful for artificial vision. We brainstormed on ways to use clustering for social networks and other web databases. We also discussed potential Hadoop/MapReduce projects using pycloud or other cloud processing services. The meeting closed with burritos, fried tacos, and a bit of late night hacking.&lt;br /&gt;
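The two distance measures discussed can be sketched in Lua (function names are mine; a minimal illustration, not code from the meeting):

```lua
-- Euclidean distance: straight-line distance between two points
function euclidean(a, b)
	local sum = 0
	for i = 1, #a do
		sum = sum + (a[i] - b[i]) ^ 2
	end
	return math.sqrt(sum)
end

-- Manhattan distance: sum of absolute per-axis differences
-- (no square root, so it is cheaper to compute)
function manhattan(a, b)
	local sum = 0
	for i = 1, #a do
		sum = sum + math.abs(a[i] - b[i])
	end
	return sum
end
```

Either function can serve as the distance measure inside k-means; swapping one for the other changes the shape of the resulting clusters.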
&lt;br /&gt;
=== Meeting minutes for Mar 18 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Daniel, Brad, Mike, and New-Mike.&lt;br /&gt;
&lt;br /&gt;
We had a general discussion about many things.&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Mar 11 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Darius, Daniel, Brad, Phil Stewart, Mike, and A.J.&lt;br /&gt;
&lt;br /&gt;
Brad presented on Subsumption architectures. He will attach slides for this and the previous presentation. We watched a Breve demo of Brad&#039;s subsumption implementation (a very abstracted version equivalent to nested ifs), and he did some live coding which was fun. &lt;br /&gt;
&lt;br /&gt;
Brad suggested a long term contest idea analogous to Hackerspaces in Space, maybe using pygame. We discussed various ideas that would make fun competitions. &lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Mar 4 2010 ===&lt;br /&gt;
&lt;br /&gt;
Second-hand minutes about the meeting from Daniel (who did not attend due to a scheduling conflict):&lt;br /&gt;
* NLTK intro from Todd Fine (first few chapters of NLTK book - see resources section for link)&lt;br /&gt;
* Discussion of approaches to AI vs. NLP in the group (AI more game/sim oriented; NLP more machine-learning oriented, i.e. Bayesian)&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Feb 25 2010 ===&lt;br /&gt;
&lt;br /&gt;
In attendance were Nikolas, Todd, Phil, Michael, Daniel, Brad, and Darius. &lt;br /&gt;
&lt;br /&gt;
* We agreed to alternate AI and NLP topics every other week to give people more time to digest material and lighten the burden of presenters/teachers&lt;br /&gt;
* Daniel will present on NLP/NLTK next meeting&lt;br /&gt;
&lt;br /&gt;
Brad did a great demo of several Breve simulations, including the capture-the-flag simulation he ported to Python from a class he&#039;d taken. We looked at simulations of Braitenberg machines moving towards or away from stimulation sources. We analyzed the two existing CTF bots and looked at the code that defines them, and asked Brad a lot of questions about what the bots could do in code (there are a lot of specifics!). We&#039;re supposed to install Breve before the next AI-focus meeting and start poking at the code.&lt;br /&gt;
&lt;br /&gt;
During Brad&#039;s presentation, at the point where he briefly covered AI history, there was a fascinating conversation between Brad, Nikolas, and Todd about ways to define and contrast machine learning and AI. In the end the consensus seemed to be that machine learning is a rigorous academic field with a focus on mathematics and numerical analysis, whereas AI is more general and has a more philosophical bent. Brad said that in his school days, the machine learning profs would make a point of saying they weren&#039;t in &amp;quot;AI&amp;quot;. Nikolas posited that it might be due to the stigma AI received from its failures to achieve the rapid results it promised early on, and that seemed logical.&lt;br /&gt;
&lt;br /&gt;
The code for CTF has been put up on [http://github.com/jdar/ctf GitHub].&lt;br /&gt;
&lt;br /&gt;
=== Meeting minutes for Feb 20 2010 ===&lt;br /&gt;
&lt;br /&gt;
The first NARG meeting was held on Feb 20, 2010 at Sticky Fingers Bakery. In attendance were Brad, Darius, Phil (not Stewart - a HacDC newcomer), and Daniel. The conversation was relatively free-form, but a few suggestions were favored:&lt;br /&gt;
&lt;br /&gt;
* Meetings will be held at HacDC on Thursday evenings at 7pm, with the understanding that, given the high frequency of meetings, some folks will miss some of them.&lt;br /&gt;
* Brad will put together a demo/tutorial using the spiderland.org breve environment on Braitenberg vehicles as an entry point into AI learning. We will collectively try to use this environment (a virtual 3D world with actuators and sensors for 3D movement and input) and graduate to subsumption architectures and neural nets. We&#039;ll use Python, since most people are willing to use it and have at least played with it, though Brad personally prefers steve, breve&#039;s native language (some unholy combo of Smalltalk and JavaScript?).&lt;br /&gt;
** http://spiderland.org&lt;br /&gt;
** http://en.wikipedia.org/wiki/Braitenberg_vehicles&lt;br /&gt;
** http://en.wikipedia.org/wiki/Subsumption_architecture&lt;br /&gt;
* Daniel will put together a demo/tutorial based on NLTK and the book &amp;quot;Natural Language Processing with Python&amp;quot;, which he has a copy of for reference.&lt;br /&gt;
* We will eventually choose a robotics platform for physical AI: either a repurposed Roomba-type solution (favored by Phil) or an open AVR/Arduino/microcontroller-based bot like: http://www.adafruit.com/blog/2009/04/20/arduino-powered-braitenberg-vehicle-light-seeking-robot/&lt;br /&gt;
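For a flavor of how simple a Braitenberg vehicle's control loop is, here is a hedged Lua sketch (function name and gain parameter are mine): each light sensor drives the motor on the opposite side, so the vehicle steers toward the light.

```lua
-- One control step of a Braitenberg "vehicle 2b"-style robot: crossed
-- excitatory wiring makes the motor opposite the brighter sensor spin
-- faster, turning the vehicle toward the light source. Hypothetical sketch.
function braitenberg_step(left_sensor, right_sensor, gain)
	local left_motor = gain * right_sensor   -- right sensor drives left motor
	local right_motor = gain * left_sensor   -- left sensor drives right motor
	return left_motor, right_motor
end
```

With light off to the right (the right sensor reading higher), the left motor spins faster and the vehicle turns right, toward the source.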
&lt;br /&gt;
Other topics:&lt;br /&gt;
* Brad, Todd, Darius, and Daniel have downloaded the Google AI Tron code. Brad and Todd have working custom code, and we will keep an eye out for good show-and-tell opportunities. Brad&#039;s solution is a neural-net-based one.&lt;br /&gt;
* Daniel brought up the idea of machine-readable codification of human ideas/statements and the political ramifications, after Phil mentioned .gov open data and how it&#039;s not well formatted for real-time use. Brad mentioned the language Lojban (http://www.lojban.org/tiki/Lojban), which attempts to remove ambiguity.&lt;br /&gt;
* Daniel is interested in using AI for bio-signal interpretation and NLP for emotionally contextual interfaces/digital ghosts. Darius is interested in using NLP for matching content with expertise, like http://vark.com, which got acquired by Google a week or so ago. Brad is interested in AI as a practitioner (it&#039;s his job) and wants to do some virtual 3D simulations. Phil is open to pretty much anything (he&#039;s too young to know better).&lt;br /&gt;
* Brad suggested there were ways to bridge AI and NLP. Daniel raised the idea of bridging the two via agent-based AI that uses NLP-based communication models in evolutionary scenarios, and it generally convinced everyone that there are exciting potential bridges between the two disciplines.&lt;/div&gt;</summary>
		<author><name>Omaha</name></author>
	</entry>
	<entry>
		<id>https://old.hacdc.org/index.php?title=Genetic_programming_example_in_lua&amp;diff=3463</id>
		<title>Genetic programming example in lua</title>
		<link rel="alternate" type="text/html" href="https://old.hacdc.org/index.php?title=Genetic_programming_example_in_lua&amp;diff=3463"/>
		<updated>2010-06-18T00:07:01Z</updated>

		<summary type="html">&lt;p&gt;Omaha: genetic programming w/ lua&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The following code is capable of converging on a solution to Brad&#039;s parabola problem, which can be found on the [[NARG]] page.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
master_stack = {}&lt;br /&gt;
function_table = {}&lt;br /&gt;
type_array = {}&lt;br /&gt;
type_stack = {}&lt;br /&gt;
&lt;br /&gt;
stack_size = 32&lt;br /&gt;
&lt;br /&gt;
num_types = 0&lt;br /&gt;
&lt;br /&gt;
num_x_samples = 5&lt;br /&gt;
constant = {0,1,2,3,4}&lt;br /&gt;
expected_result = {0, 4.75, 12.75, 23.98, 38.45, 56.15}&lt;br /&gt;
function sleep(n)&lt;br /&gt;
	os.execute(&amp;quot;sleep &amp;quot; .. tonumber(n))&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function deepcopy(object)&lt;br /&gt;
	local lookup_table = {}&lt;br /&gt;
	local function _copy(object)&lt;br /&gt;
		if type(object) ~= &amp;quot;table&amp;quot; then&lt;br /&gt;
			return object&lt;br /&gt;
		elseif lookup_table[object] then&lt;br /&gt;
			return lookup_table[object]&lt;br /&gt;
		end&lt;br /&gt;
		local new_table = {}&lt;br /&gt;
		lookup_table[object] = new_table&lt;br /&gt;
		for index, value in pairs(object) do&lt;br /&gt;
			new_table[_copy(index)] = _copy(value)&lt;br /&gt;
		end&lt;br /&gt;
		return setmetatable(new_table, getmetatable(object))&lt;br /&gt;
	end&lt;br /&gt;
	return _copy(object)&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
function print_table(theTable, indent)&lt;br /&gt;
	&lt;br /&gt;
	local iString = &amp;quot;&amp;quot;&lt;br /&gt;
	for index = 1, indent do&lt;br /&gt;
		iString = iString .. &amp;quot;-&amp;quot;&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
	-- walk all the topmost values in the table&lt;br /&gt;
	for k,v in pairs(theTable) do&lt;br /&gt;
		print(iString ,k ,v)&lt;br /&gt;
		if type(v) == &amp;quot;table&amp;quot; then&lt;br /&gt;
			print_table(v, indent + 1)&lt;br /&gt;
		end&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function print_program(theProgram)&lt;br /&gt;
	print(&amp;quot;Program Stack Bottom&amp;quot;)&lt;br /&gt;
	for counter = 1, table.getn(theProgram) do&lt;br /&gt;
		if (theProgram[counter].name == &amp;quot;real&amp;quot;) then&lt;br /&gt;
			print(theProgram[counter].value)&lt;br /&gt;
		else&lt;br /&gt;
			print(theProgram[counter].name)&lt;br /&gt;
		end&lt;br /&gt;
	end&lt;br /&gt;
	print(&amp;quot;Program Stack Top&amp;quot;)&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
-- add the type to the type table&lt;br /&gt;
function insert_function(functionName, functionCall, functionType, nArguments)&lt;br /&gt;
	-- ensure we haven&#039;t already inserted this function&lt;br /&gt;
	if function_table[functionName] ~= nil then&lt;br /&gt;
		print(&amp;quot;function already defined: &amp;quot; .. functionName)&lt;br /&gt;
		return false&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
	-- add this function to the function_table&lt;br /&gt;
	tempFunction = {}&lt;br /&gt;
	tempFunction.fName = functionCall&lt;br /&gt;
	tempFunction.fType = functionType&lt;br /&gt;
	tempFunction.nArgs = nArguments&lt;br /&gt;
&lt;br /&gt;
	function_table[functionName] = tempFunction&lt;br /&gt;
	return true&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function insert_type(typeName, fPointer, weight)&lt;br /&gt;
	-- walk the list of types and make sure this one hasn&#039;t already been defined&lt;br /&gt;
	for index = 1, table.getn(type_array) do&lt;br /&gt;
		if (type_array[index].name == typeName) then&lt;br /&gt;
			print(&amp;quot;type already defined: &amp;quot; .. typeName)&lt;br /&gt;
			return false&lt;br /&gt;
		end&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
	num_types = num_types + 1&lt;br /&gt;
	type_array[num_types] = {}&lt;br /&gt;
	type_array[num_types].name = typeName&lt;br /&gt;
	type_array[num_types].fPointer = fPointer&lt;br /&gt;
	type_array[num_types].weight = weight&lt;br /&gt;
&lt;br /&gt;
	if fPointer == nil then&lt;br /&gt;
		-- establish the stack for this type&lt;br /&gt;
		type_stack[typeName] = {}&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
	return true&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function local_add(arguments)&lt;br /&gt;
	-- print(&amp;quot;adding &amp;quot; .. arguments[1] .. &amp;quot; and &amp;quot; .. arguments[2])&lt;br /&gt;
	return arguments[1] + arguments[2]&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function local_subtract(arguments)&lt;br /&gt;
	-- print(&amp;quot;subtracting &amp;quot; .. arguments[1] .. &amp;quot; and &amp;quot; .. arguments[2])&lt;br /&gt;
	return arguments[1] - arguments[2]&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function local_multiply(arguments)&lt;br /&gt;
	-- print(&amp;quot;multiplying &amp;quot; .. arguments[1] .. &amp;quot; and &amp;quot; .. arguments[2])&lt;br /&gt;
	return arguments[1] * arguments[2]&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function local_divide(arguments)&lt;br /&gt;
	-- print(&amp;quot;dividing &amp;quot; .. arguments[1] .. &amp;quot; and &amp;quot; .. arguments[2])&lt;br /&gt;
	-- dragon&lt;br /&gt;
	if (arguments[2] == 0) then&lt;br /&gt;
		arguments[2] = 0.00001&lt;br /&gt;
	end&lt;br /&gt;
	return arguments[1] / arguments[2]&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function some_constant()&lt;br /&gt;
	-- print(&amp;quot;pushing constant onto stack:&amp;quot; .. constant[currentConstant])&lt;br /&gt;
	return constant[currentConstant]&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function establish_types()&lt;br /&gt;
	-- add each of our types to the type_table&lt;br /&gt;
	insert_type(&amp;quot;real&amp;quot;, nil, 10)&lt;br /&gt;
	insert_type(&amp;quot;+&amp;quot;, local_add, 1)&lt;br /&gt;
	insert_type(&amp;quot;*&amp;quot;, local_multiply, 1)&lt;br /&gt;
	--insert_type(&amp;quot;-&amp;quot;, local_subtract, 1)&lt;br /&gt;
	--insert_type(&amp;quot;/&amp;quot;, local_divide, 1)&lt;br /&gt;
	insert_type(&amp;quot;X&amp;quot;, some_constant, 5)&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function establish_functions()&lt;br /&gt;
	insert_function(&amp;quot;+&amp;quot;, local_add, &amp;quot;real&amp;quot;, 2)&lt;br /&gt;
	insert_function(&amp;quot;-&amp;quot;, local_subtract, &amp;quot;real&amp;quot;, 2)&lt;br /&gt;
	insert_function(&amp;quot;*&amp;quot;, local_multiply, &amp;quot;real&amp;quot;,  2)&lt;br /&gt;
	insert_function(&amp;quot;/&amp;quot;, local_divide, &amp;quot;real&amp;quot;,  2)&lt;br /&gt;
	insert_function(&amp;quot;X&amp;quot;, some_constant, &amp;quot;real&amp;quot;, 0)&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function generate_program(programSize )&lt;br /&gt;
	return_stack = {}&lt;br /&gt;
	&lt;br /&gt;
	for counter = 1, programSize do&lt;br /&gt;
		currentNode = {}&lt;br /&gt;
		ranVal = math.random(1,table.getn(type_array))&lt;br /&gt;
		currentNode.name = type_array[ranVal].name&lt;br /&gt;
		-- beware hardcoded stuffs&lt;br /&gt;
		if currentNode.name == &amp;quot;real&amp;quot; then&lt;br /&gt;
			currentNode.value = math.random()&lt;br /&gt;
		end&lt;br /&gt;
&lt;br /&gt;
		table.insert(return_stack, currentNode)&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
	return return_stack&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function process_master()&lt;br /&gt;
	while table.getn(master_stack) ~= 0 do&lt;br /&gt;
	--	print(&amp;quot;frame begin-------------------------------&amp;quot;)&lt;br /&gt;
	--	print(&amp;quot;current table:&amp;quot;)&lt;br /&gt;
		-- print_table(type_stack[&amp;quot;real&amp;quot;], 0)&lt;br /&gt;
		&lt;br /&gt;
		currentNode = table.remove(master_stack)&lt;br /&gt;
	--	print(&amp;quot;curret node name: &amp;quot; .. currentNode.name )&lt;br /&gt;
&lt;br /&gt;
		-- treat functions and values differently&lt;br /&gt;
		if currentNode.name == &amp;quot;real&amp;quot; then&lt;br /&gt;
		--	print(&amp;quot;current node value: &amp;quot; .. currentNode.value)&lt;br /&gt;
			-- add this value to the &#039;real&#039; stack&lt;br /&gt;
			table.insert(type_stack[&amp;quot;real&amp;quot;], currentNode.value)&lt;br /&gt;
		else&lt;br /&gt;
			-- grab the num of params needed for this function&lt;br /&gt;
			nRequired = function_table[currentNode.name].nArgs&lt;br /&gt;
			theType = function_table[currentNode.name].fType&lt;br /&gt;
&lt;br /&gt;
			-- make sure there are enough objects on the param stack to call this function&lt;br /&gt;
			-- print(&amp;quot;name = &amp;quot; .. currentNode.name)&lt;br /&gt;
			-- print(function_table[currentNode.name].fType)&lt;br /&gt;
			-- print(type_stack[&amp;quot;real&amp;quot;])&lt;br /&gt;
			if (table.getn(type_stack[function_table[currentNode.name].fType]) &amp;lt; nRequired) then&lt;br /&gt;
				-- not enough params available, NOOP&lt;br /&gt;
			--	print(&amp;quot;not enough params, NOOP&amp;quot;)&lt;br /&gt;
			else&lt;br /&gt;
				theArguments = {}&lt;br /&gt;
				-- build an array for passing the params to the function&lt;br /&gt;
				for counter = 1, nRequired do&lt;br /&gt;
					theArguments[counter] = table.remove(type_stack[theType])&lt;br /&gt;
				end&lt;br /&gt;
&lt;br /&gt;
				-- call the function&lt;br /&gt;
				returnVal = function_table[currentNode.name].fName(theArguments)&lt;br /&gt;
&lt;br /&gt;
				-- push the return val to the appropriate stack&lt;br /&gt;
				table.insert(type_stack[function_table[currentNode.name].fType], returnVal)&lt;br /&gt;
&lt;br /&gt;
			end&lt;br /&gt;
		end&lt;br /&gt;
	--	print(&amp;quot;frame end---------------------------------&amp;quot;)&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function grab_result()&lt;br /&gt;
	-- return the top of the &amp;quot;real&amp;quot; stack (large penalty value if the stack is empty)&lt;br /&gt;
	if (table.getn(type_stack[&amp;quot;real&amp;quot;]) == 0) then&lt;br /&gt;
		return 9999999&lt;br /&gt;
	end&lt;br /&gt;
	return table.remove(type_stack[&amp;quot;real&amp;quot;])&lt;br /&gt;
&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
--generate_master()&lt;br /&gt;
&lt;br /&gt;
--process_master()&lt;br /&gt;
&lt;br /&gt;
--print_result()&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
function create_population()&lt;br /&gt;
	-- loop through each member in the population&lt;br /&gt;
	for count = 1, population_size do&lt;br /&gt;
		current_member = {}&lt;br /&gt;
		current_member._error = 99999&lt;br /&gt;
		&lt;br /&gt;
		-- generate the member&#039;s program data&lt;br /&gt;
		current_member.program = generate_program(initial_member_size)&lt;br /&gt;
		&lt;br /&gt;
		-- add this member to the population&lt;br /&gt;
		table.insert(population, current_member)&lt;br /&gt;
	end&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function get_best_candidate()&lt;br /&gt;
	local best_error = 99999&lt;br /&gt;
	local best_index = 0&lt;br /&gt;
&lt;br /&gt;
	for tIndex = 1, table.getn(candidates) do&lt;br /&gt;
		if candidates[tIndex]._error &amp;lt; best_error then&lt;br /&gt;
			best_index = tIndex&lt;br /&gt;
			best_error = candidates[tIndex]._error&lt;br /&gt;
		end&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
	--print(&amp;quot;candidate size: &amp;quot; .. table.getn(candidates) .. &amp;quot;\nbest error from candidates: &amp;quot; .. best_error)&lt;br /&gt;
&lt;br /&gt;
	-- remove and return the *best* candidate&lt;br /&gt;
	return table.remove(candidates, best_index)&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function mutate_child(origin, n_mutations)&lt;br /&gt;
	dest_node = deepcopy(origin)&lt;br /&gt;
&lt;br /&gt;
	for counter = 1, n_mutations do&lt;br /&gt;
		-- random point inside this child&lt;br /&gt;
		index_mutate = math.random(1, table.getn(dest_node.program))&lt;br /&gt;
		&lt;br /&gt;
		ranVal = math.random(1,table.getn(type_array))&lt;br /&gt;
		dest_node.program[index_mutate].name = type_array[ranVal].name&lt;br /&gt;
		-- beware hardcoded stuffs&lt;br /&gt;
		if dest_node.program[index_mutate].name == &amp;quot;real&amp;quot; then&lt;br /&gt;
			dest_node.program[index_mutate].value = math.random()&lt;br /&gt;
		end&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
	return dest_node&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
function crossover_parents(mommy, daddy)&lt;br /&gt;
	index_m = math.random(1, table.getn(mommy))&lt;br /&gt;
	--index_d = math.random(1, table.getn(daddy))&lt;br /&gt;
&lt;br /&gt;
	the_child = {}&lt;br /&gt;
&lt;br /&gt;
	-- add the first index_m elements of mommy to the_child&lt;br /&gt;
	for xxx = 1, index_m do&lt;br /&gt;
		table.insert(the_child, mommy[xxx])&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
	-- add elements index_m+1 through #daddy of daddy to the_child&lt;br /&gt;
	for xxx = index_m+1, table.getn(daddy) do&lt;br /&gt;
		table.insert(the_child, daddy[xxx])&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
	return the_child&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
-- initialize the population (1 program for each member in the population)&lt;br /&gt;
establish_functions()&lt;br /&gt;
establish_types()&lt;br /&gt;
population = {}&lt;br /&gt;
population_size = 10000&lt;br /&gt;
initial_member_size = 24&lt;br /&gt;
create_population()&lt;br /&gt;
&lt;br /&gt;
error_history = {}&lt;br /&gt;
error_threshhold = 0.016&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
max_num_iterations = 10000&lt;br /&gt;
current_iteration = 1&lt;br /&gt;
&lt;br /&gt;
-- while we haven&#039;t reached our error threshold or the iteration limit&lt;br /&gt;
while current_iteration &amp;lt;= max_num_iterations do&lt;br /&gt;
&lt;br /&gt;
	print(&amp;quot;iteration #&amp;quot; .. current_iteration)&lt;br /&gt;
&lt;br /&gt;
	--print_table(population, 2)&lt;br /&gt;
&lt;br /&gt;
	-- get the error for each of the members of our population&lt;br /&gt;
	for pcount = 1, population_size do&lt;br /&gt;
&lt;br /&gt;
		--print_table(population[pcount], 1)&lt;br /&gt;
&lt;br /&gt;
		-- initialize the error for this program&lt;br /&gt;
		population[pcount]._error = 0&lt;br /&gt;
		--print(&amp;quot;pcount = &amp;quot; .. pcount)&lt;br /&gt;
&lt;br /&gt;
		if (current_iteration == 2) then&lt;br /&gt;
			--print_table(population[pcount], 1)&lt;br /&gt;
		end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
			-- for each value of X&lt;br /&gt;
			for icount = 1, num_x_samples do&lt;br /&gt;
				-- establish the index to the current X/Y pair&lt;br /&gt;
				currentConstant = icount&lt;br /&gt;
&lt;br /&gt;
			--	print(&amp;quot;population size: &amp;quot; .. table.getn(population))&lt;br /&gt;
			--	print(&amp;quot;master_stack size: &amp;quot; .. table.getn(population[pcount].program))&lt;br /&gt;
&lt;br /&gt;
				-- make a copy of this guy&#039;s program&lt;br /&gt;
				master_stack = deepcopy(population[pcount].program)&lt;br /&gt;
&lt;br /&gt;
				-- initialize the stacks for each data type&lt;br /&gt;
				type_stack[&amp;quot;real&amp;quot;] = {}&lt;br /&gt;
				&lt;br /&gt;
				-- evaluate the program&lt;br /&gt;
				process_master()&lt;br /&gt;
&lt;br /&gt;
			--	print(&amp;quot;master_stack size: &amp;quot; .. table.getn(population[pcount].program))&lt;br /&gt;
&lt;br /&gt;
			--	print(&amp;quot;!!! - &amp;quot; .. table.getn(type_stack[&amp;quot;real&amp;quot;]))&lt;br /&gt;
&lt;br /&gt;
				the_result = grab_result()&lt;br /&gt;
&lt;br /&gt;
				--print(&amp;quot;the result = &amp;quot; .. the_result)&lt;br /&gt;
&lt;br /&gt;
				-- add the current error to the total error for this program&lt;br /&gt;
				population[pcount]._error = population[pcount]._error + math.abs(the_result - expected_result[currentConstant])&lt;br /&gt;
			end&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
	-- scan the population and find the lowest error&lt;br /&gt;
	total_error = 0&lt;br /&gt;
	best_error = 99999&lt;br /&gt;
	best_index = 0&lt;br /&gt;
	for pcount = 1, population_size do&lt;br /&gt;
		total_error = total_error + population[pcount]._error&lt;br /&gt;
		if (population[pcount]._error &amp;lt; best_error) then&lt;br /&gt;
			best_index = pcount&lt;br /&gt;
			best_error = population[pcount]._error&lt;br /&gt;
		end&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
	average_error = total_error / population_size&lt;br /&gt;
&lt;br /&gt;
	print(&amp;quot;lowest error : &amp;quot; .. best_error)&lt;br /&gt;
	print(&amp;quot;average error: &amp;quot; .. average_error)&lt;br /&gt;
&lt;br /&gt;
	table.insert(error_history, best_error)&lt;br /&gt;
&lt;br /&gt;
	-- if the error is under our threshold, break out and report success&lt;br /&gt;
	if (best_error &amp;lt; error_threshhold) then&lt;br /&gt;
		break&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	--=======================================================================&lt;br /&gt;
	-- evolution FTW&lt;br /&gt;
	--=======================================================================&lt;br /&gt;
&lt;br /&gt;
	children = {}&lt;br /&gt;
	child = {}&lt;br /&gt;
	candidates = deepcopy(population)&lt;br /&gt;
&lt;br /&gt;
	-- keep the top 1% of the population unchanged (elitism)&lt;br /&gt;
	tnum = math.ceil(table.getn(population) / 100)&lt;br /&gt;
	for cpop = 1, tnum do&lt;br /&gt;
		table.insert(children, get_best_candidate())&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	--sleep(10)&lt;br /&gt;
	-- the next 10% should be mutations of the elite&lt;br /&gt;
	xnum = tnum + math.floor(tnum * 10)&lt;br /&gt;
	candidates = deepcopy(children) -- reset candidates&lt;br /&gt;
	-- print(&amp;quot;we have &amp;quot; .. table.getn(candidates) .. &amp;quot; candidates to chose from&amp;quot;)&lt;br /&gt;
	for cpop = (tnum+1), xnum do&lt;br /&gt;
		child = mutate_child(candidates[math.random(1, table.getn(candidates))], 5)&lt;br /&gt;
		table.insert(children, deepcopy(child))&lt;br /&gt;
		-- print(&amp;quot;size of children: &amp;quot; .. table.getn(children))&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
	-- the remainder should be crossovers of the full population&lt;br /&gt;
	for pcount = (xnum+1), population_size do&lt;br /&gt;
&lt;br /&gt;
		-- take 7 random values from the population&lt;br /&gt;
		candidates = {}&lt;br /&gt;
		child = {}&lt;br /&gt;
		for ccount = 1, 7 do&lt;br /&gt;
			table.insert(candidates, deepcopy(population[math.random(1, table.getn(population))]))&lt;br /&gt;
		end&lt;br /&gt;
&lt;br /&gt;
		-- print_table(candidates, 1)&lt;br /&gt;
&lt;br /&gt;
		mom = get_best_candidate()&lt;br /&gt;
		dad = get_best_candidate()&lt;br /&gt;
		child.program = crossover_parents(mom.program, dad.program)&lt;br /&gt;
&lt;br /&gt;
		-- write the child data to the list of children&lt;br /&gt;
		table.insert(children, child)&lt;br /&gt;
	end&lt;br /&gt;
&lt;br /&gt;
	-- move the new population to their proper home&lt;br /&gt;
	population = deepcopy(children)&lt;br /&gt;
&lt;br /&gt;
	current_iteration = current_iteration + 1&lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;all done!&amp;quot;)&lt;br /&gt;
--print_table(population[best_index].program, 1)&lt;br /&gt;
print_program(population[best_index].program)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Omaha</name></author>
	</entry>
</feed>