On Gaming

Posted by admin 30 Dec 2015 at 20:26

I'm an avid puzzler; it's the most basic brain stretch I've yet found, and I gravitate towards the types of puzzles that force me into new forms. The Friday and Saturday NY Times Crosswords are a long-time favorite of mine; I also like cryptic crosswords and six-star sudoku (the ones that run on Saturdays, and that cannot usually be completed using standard techniques). Last Christmas my mother-in-law got me a subscription to Games Magazine; it gives me great joy that pencil-and-paper puzzle solving still exists and is readily available in the world. I love the tangibility of it.

Games also reviews computer games on all platforms; I read the reviews in a mostly uninterested way, because I don't generally game in forms other than pencil and paper (I work on a computer, so I don't want to play on one; also carpal tunnel, blah blah, etc.). Then two things happened: I got an i-Device through my work (I'm doing iOS development right now, another story), and Games wrote about a tablet game called Monument Valley three separate times. So I took advantage of the former to try out the latter. Monument Valley is a delightful game: it's beautiful, clever, fun to play, and (perhaps most importantly) not frustrating. It's not that the puzzles are easy, per se; it's perhaps more that solving them is straightforward. You fiddle with the levers and swivels on each level, and eventually a solution presents itself (and, in the case of this game, often delightfully so).

Monument Valley is, however, not the game I want to write about. After I was done, I googled "Games like Monument Valley," and that led me to a game called Device 6. Playing Device 6 was probably the single greatest gaming experience I've ever had. It was atmospheric and narrative driven, it used text in smart and innovative ways, and like Monument Valley (good job, internet!) I didn't get stuck--the puzzles all landed right in that space of not-too-easy, yet straightforward. I completed almost the whole thing on a plane ride. I've since bought the rest of the games by the studio (Simogo). The Year Walk uses a similar storytelling apparatus to Device 6 while being absolutely, completely different (and also genuinely scary), and if I hadn't played Device 6 right before it, I'd be calling it my best gaming experience. I have some complaints about The Sailor's Dream, but I also understand that it's doing a different thing, and it too is beautiful and narrative-driven. Anyway, in summary: Simogo are a bunch of fucking geniuses.

What have we learned? There's an entire class of computer game that I wasn't aware of. It's low-risk (in both time and dollars invested) and high-reward. The best examples of it are highly narrative driven. I note the following additional (positive) attributes:

  • Maximal use of AV (image, animation, audio) to create mood
  • Minimal use of device-specific capabilities (The Year Walk used several multi-touch things in its puzzles, and I don't know if I liked any of them)
  • Minimal backtracking
  • Straightforward puzzles. Solutions are found in your environment
  • Narrative driven but not character driven?

There's a meta-lesson here, too. About discovery, maybe, and also about the device (which is an iPad Mini)? The discovery of a studio making a specific kind of art for the specific device, and the means by which I did it? The following of threads? It's not coming to me right now.


Observe/Orient/Decide

Posted by admin 23 Dec 2015 at 19:37

A few weeks ago, someone in my Feed alerted me to the existence of John Boyd and the OODA loop--Observe/Orient/Decide/Act. I read an article on this website, which, you will observe, is called The Art of Manliness. That title gave me an immediate (negative) impression of its general content, but then I read the article anyway, and it was smart and went into some interesting detail about Boyd himself. So I learned something. As per usual, it wasn't immediately clear what that something was.

I've been living full-bore in the modern world for a little while now. I don't know exactly when I started, or why I did start; I didn't really notice until after I was actually doing it. Also, it's probably not at all clear what I mean by "living full-bore in the modern world." I mean it in the Future Shock sense: the world is changing incredibly quickly, and I'm actively trying to keep up with current thought and current knowledge. As a daily practice, it's an absolute chore. My brain, at my age, would be happier settling into patterns and routines, practicing things it already knows how to do. Not doing that is really hard.

If I could state the grand thesis of Boyd and OODA, I'd say that it's the rejection of Dogma and Ideology based on the empirical observation that the only constant in our universe is change. For Boyd--a military strategist--it was probably based on the observation that generals were always fighting the last war. Or if you code (as I do) you can't help but notice that there's a new language to code in every year, that there's a new framework that's faster and better and more effective every few months, that there's a new philosophy of data abstraction that blows the old one away every week. I've been working as an indie developer for three years now and I have yet to use the same technology twice. It's a hard road for the brain to travel. It's probably also necessary.

I also can't help but notice that the Rejection of Dogma is also a Dogma. Weirdly, I notice that the solution to that particular seeming paradox is not to also reject that, it's to accept some dogma some of the time. I think there might be something there--as per usual, it's not immediately clear what that something is.


Using AutoHarp and a Character-Based RNN to Create MIDI Drum Loops

Posted by admin 16 Sep 2015 at 18:39

This whitepaper details a method of generating new MIDI drum loops using a combination of AutoHarp — an algorithmic music playing software package created by the author — and Andrej Karpathy's character-based Recurrent Neural Network, char-rnn. It is a herculean task to track all of the inputs and prior art that led to this experiment and informed its construction (the history of algorithmic music is, at this point, vast), but a few specific experiments led me to begin investigating char-rnn as a fruitful source for progress in the arena:

AutoHarp is an algorithmic song generator. It differs from most other applications in the space in that its output consists of fully rendered, multi-track MIDI files with recognizable instruments and repeating sections, constructed as a human popular-song composer might construct his or her songs. The author uses these files as the basis for his music, as documented throughout this site. While the output of AutoHarp is impressively varied, and the construction of an individual song involves hundreds of thousands of discrete "decisions," it is also limited: the Markov Chains that generate melodies, song structures, chord progressions, and improvisations by the members of the "band" are literally hard-coded into the program itself. To transcend this, to allow the machine to become more literally creative, deep learning is required.

This experiment focused on AutoHarp's drummer. In the open-source version of AutoHarp, you "seed" it with existing MIDI drum loops (a plethora of free and royalty-free MIDI loops are available online for the Googling); thereafter it plays in particular musical genres by selecting a loop or loops you have tagged in that genre during the seeding process and modifying them slightly as the music requires (e.g. repeating loops or truncating them, adding simple fills at the end of phrases, switching one type of drum to another to add variance).
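
To make that concrete, here is a rough sketch of the select-and-modify behavior just described. This is illustrative Python only, not AutoHarp's actual Perl implementation; the data structures and function names are hypothetical, and it assumes 240 MIDI ticks per beat (960 per 4/4 bar), which is what AutoHarp uses.

import random

# Hypothetical sketch of the drummer's seed-and-select behavior described above.
# A "loop" here is a dict with genre tags, a length in bars, and a list of
# (tick, pitch, velocity) hits; none of this is AutoHarp's real API.
TICKS_PER_BAR = 960   # 240 ticks per beat, 4/4

def pick_loop(library, genre):
    """Choose a seeded loop that was tagged with the requested genre."""
    candidates = [loop for loop in library if genre in loop["genres"]]
    return random.choice(candidates)

def fit_to_bars(loop, bars):
    """Tile the loop end to end, then truncate it to the requested length."""
    total = bars * TICKS_PER_BAR
    loop_len = loop["bars"] * TICKS_PER_BAR
    hits, offset = [], 0
    while offset < total:
        hits.extend((offset + t, p, v) for t, p, v in loop["hits"] if offset + t < total)
        offset += loop_len
    return hits

def add_simple_fill(hits, bars, snare=38):
    """Tack four 16th-note snare hits onto the last beat of the phrase."""
    start = bars * TICKS_PER_BAR - TICKS_PER_BAR // 4
    sixteenth = TICKS_PER_BAR // 16
    hits += [(start + i * sixteenth, snare, 90 + i * 10) for i in range(4)]
    return sorted(hits)

In the real program the "modify" step is far richer (switching one drum for another, varying the fills, and so on), but the shape of it is the same.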

The very first attempt was merely to print out the MIDI notes of all the drum loops in my AutoHarp library as text representations of their MIDI data. This was done without regard for differences in tempo, genre, or meter (a.k.a. time signature) among the loops themselves. Because it is easier to manage and manipulate in code, I converted MIDI note_on/note_off events, which use relative time, into a single quasi-MIDI "note" event that uses absolute time and a note duration. I also used AutoHarp's MIDI utilities to break each loop into one-bar sections and re-zeroed the absolute time of each bar. A section of the input file appears below:

START LOOP
TEMPO 112
note,0,31,9,36,98
note,0,35,9,51,80
note,105,37,9,51,69
note,167,37,9,51,56
note,231,36,9,53,119
note,349,37,9,51,70
...
note,1434,6,9,51,85
END LOOP
START LOOP
TEMPO 123
note,0,30,9,36,100
note,0,31,9,51,85*
...

*The "note" quasi-event is of the format: 'note',absolute time,duration,channel,pitch,velocity. In Standard MIDI percussion, pitches have set associated drums; in my library, e.g., 35 == kick, 38 == snare, 42 == closed hi-hat, etc.
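
For reference, the conversion from relative-time note_on/note_off pairs to these absolute-time "note" events amounts to the following. This is a minimal Python sketch of the idea rather than AutoHarp's actual (Perl) code, and the input tuple format is an assumption:

# Sketch: turn relative-time MIDI note events into absolute-time "note" records
# of the form ('note', absolute_time, duration, channel, pitch, velocity).
# Input events are assumed to be (delta_ticks, kind, channel, pitch, velocity).

def to_note_events(events):
    abs_time = 0
    open_notes = {}          # (channel, pitch) -> (start_time, velocity)
    notes = []
    for delta, kind, channel, pitch, velocity in events:
        abs_time += delta
        key = (channel, pitch)
        if kind == 'note_on' and velocity > 0:
            open_notes[key] = (abs_time, velocity)
        elif key in open_notes:   # note_off, or a note_on with velocity 0
            start, vel = open_notes.pop(key)
            notes.append(('note', start, abs_time - start, channel, pitch, vel))
    return sorted(notes, key=lambda n: n[1])

def rezero_bar(bar_notes, bar_start):
    """Shift one bar's worth of notes so its absolute times start at zero."""
    return [('note', t - bar_start, d, ch, p, v) for _, t, d, ch, p, v in bar_notes]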

This file was 1.2 MB — according to the documentation, a small sample size. Nonetheless, the initial results — run with default settings for char-rnn — were somewhat promising. Example output from a mid-epoch checkpoint looked like:

START LOOP
TEMPO 93
note,0,43,9,43,111
note,143,30,9,42,84
note,718,24,9,36,94
note,455,30,9,48,115
note,138,30,9,36,116
note,248,30,9,36,92
...
note,9,36,9,38,114
note,156,30,9,42,117
note,482,41,9,42,73
...
note,486,30,9,38,115
END LOOP
START LOOP
TEMPO 150
note,0,30,9,49,111
...
note,395,42,9,51,95
END LOO
TART LOOP
TEMPO 80
note,0,30,9,38,112
note,148,30,9,36,68
note,727,30,9,36,101
...
END LOOP
START LOOP
TEMPO 100
note,0,34,9,36,101
note,0,30,9,9,38,104
note,607,30,9,42,108
note,48,30
note,844,30,9,38,128
...
note,948,41,9,44,85
END LLOPP
...

I used AutoHarp's built-in text utilities to harvest loops, using all valid lines between Start and End markers. Converting those back to MIDI and looping the results over four bars produced output such as this:



Interesting — nothing to make Gene Krupa quake in his...um...grave — but interesting, and certainly indicative that the approach (of converting MIDI to text, building a text learning model, and then harvesting the resulting text and converting it back to MIDI) might be promising.
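
To be concrete about the harvesting step mentioned above: AutoHarp's text utilities did the work, but the logic boils down to something like this sketch (illustrative Python, not the actual utilities), which keeps only well-formed lines between the START and END markers and drops anything the network garbled:

# Sketch of the harvest: walk char-rnn's sampled text, collect well-formed
# "note" lines between START LOOP / END LOOP markers, and discard malformed
# output (truncated lines, mangled markers, wrong field counts).

def harvest_loops(sampled_text):
    loops, current = [], None
    for raw in sampled_text.splitlines():
        line = raw.strip()
        if line.startswith('START LOOP'):
            current = {'tempo': None, 'notes': []}
        elif line.startswith('END') and current is not None:
            if current['notes']:
                loops.append(current)
            current = None
        elif current is None:
            continue                       # text outside any loop
        elif line.startswith('TEMPO'):
            parts = line.split()
            if len(parts) == 2 and parts[1].isdigit():
                current['tempo'] = int(parts[1])
        elif line.startswith('note'):
            fields = line.split(',')
            # a valid quasi-note line has exactly six fields, five of them integers
            if len(fields) == 6 and all(f.isdigit() for f in fields[1:]):
                current['notes'].append([int(f) for f in fields[1:]])
    return loops

Each surviving loop is then handed back to AutoHarp's MIDI utilities to become an actual MIDI file.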

However, from there it was initially difficult to make much forward progress. Using the same method but limiting the input to only 4/4 loops, I did succeed in getting examples of fairly rhythmic one-bar loops (here again looped to four bars):


Attempts to get the machine to string together its own, longer drum lines using this method were, however, unsuccessful. I tried a variety of changes to the inputs — for instance, in one iteration I used only the Rock loops (i.e. straight-ahead 4/4 beats with kicks on 1 and 3 and snares on 2 and 4); in several I used only the 4-measure phrases as input; and I fed them through AutoHarp first to simplify the beats where possible. All outputs in this set of iterations were thematically similar to this:

Kind of like a kid who just got his drum set: he can keep things going for a couple of beats, falls apart, gets it back, and then falls apart again.

It was at this point that I had a kind of pedagogical revelation: the drum loops I was using as input were too advanced. They were recorded on MIDI drum kits by professional human players who had, presumably, years of practice to develop groove and feel. Here again is one bar of a sample input loop (from a later iteration where I had changed the format — the data is still the same):

[note      0   40  9          Bass Drum 1 127]
[note      0   42  9         Pedal Hi-Hat 127]
[note      0   41  9       Crash Cymbal 1 127]
[note    102   41  9         Pedal Hi-Hat 127]
[note    106   42  9            Ride Bell 127]
[note    181   43  9       Acoustic Snare  98]
[note    183   44  9          Bass Drum 1 116]
[note    222   41  9         Pedal Hi-Hat 127]
[note    224   42  9        Ride Cymbal 1  69]
[note    349   42  9         Pedal Hi-Hat 127]
[note    353   41  9          Bass Drum 1 125]
[note    356   42  9            Ride Bell 127]
[note    361   41  9       Acoustic Snare 117]
[note    444   42  9       Acoustic Snare  24]
[note    472   42  9         Pedal Hi-Hat 127]
[note    479   42  9          Bass Drum 1 127]
[note    481   41  9        Ride Cymbal 1  66]
[note    555   43  9       Acoustic Snare  31]
[note    595   42  9         Pedal Hi-Hat 127]
[note    595   42  9            Ride Bell 127]
[note    666   43  9          Bass Drum 1 122]
[note    672   42  9       Acoustic Snare 120]
[note    709   42  9         Pedal Hi-Hat 127]
[note    711   41  9        Ride Cymbal 1  67]
[note    833   42  9         Pedal Hi-Hat 127]
[note    835   42  9          Bass Drum 1 127]
[note    836   42  9            Ride Bell 127]
[note    837   42  9       Acoustic Snare 119]
[note    914   42  9       Acoustic Snare  24]
[note    954   42  9         Pedal Hi-Hat 127]
[note    954   42  9        Ride Cymbal 1  66]
[note    955   42  9          Bass Drum 1 126]

AutoHarp uses 240 MIDI ticks per beat, so a drum loop that sat rigidly on the grid would have time values like 0, 120, 240, 480, and so on. This loop, like all of the input loops, isn't robotically on the beat. Human drummers, especially good ones, play in a groove: they slightly anticipate or lag behind some or all of the beats in a measure. Here's what the above loop sounds like, fyi:


From Groove Monkee's free MIDI drum loops package

This is like trying to teach someone the drums by transcribing a Stewart Copeland line down to a resolution of 960th notes and expecting her to learn an expert drummer's rhythm, groove, and feel from it.

At this point, I switched file formats to a simple text drum notation that I'd created in AutoHarp to visualize drum loops. It looks like this:

36,|8.7.......8.....|8.8.......7.....|8.8.......78....|7.8.......7..7..|
38,|....9..3.4..9..3|....9..3.4..8..4|....9..3.4..9..3|....8.82........|
42,|....4.5.5.5.7.5.|6...7.5.6.6.6.5.|5...3.6.7.5.7.4.|7...7.6.5.......|
44,|....7...........|....6...........|....7...........|................|
46,|..7.............|..7.............|..7.............|..7.............|
49,|8...............|................|................|................|   

Each line represents a drum (identified by its MIDI pitch), and each character within the bars represents a 16th note. A dot means no drum hit; a number represents how hard the drum is hit, with 0 being the lightest and 9 the hardest.

In addition to forcing the hits of the loop onto a grid, I also bootstrapped the dataset by creating a series of virtual song sections of 4 bars apiece and letting AutoHarp's drummer play them (via the previously described method of altering MIDI loops), running it 5000 times. This created a 2 MB dataset, which I fed to the char-rnn train module. It took a few epochs before it got the hang of the format, but after that, well...here's a random sample of the results (now with a richer drum sound font):

The trailing off is one of the ways AutoHarp will end a phrase in order to bring the energy of the song down. It has learned well.

Straight ahead. Is that last little hi-hat flutter a screw up, or did you mean to do that?

It's learned to do little variances at the end of phrases — here, e.g., the open hi-hats.

If it weren't so perfectly on the beat, this would be indistinguishable from a human player.

Yeah, this one's a little wonky; it's also undeniably rhythmically interesting.

I didn't cherry-pick those. I have a script that runs through char-rnn's checkpoint files (on this run I was writing them every 25 iterations, so I have a lot of data for this single experiment), harvests the results, and writes them to MIDI files; I then sample them randomly, and the five above came out of that random sampling one right after another. There are certainly loops in the set that aren't quite passing as drum grooves (that fifth one is probably one of them), but there aren't very many of them. This just seems to me to be a stunning result: I tried a very difficult pedagogy and it more or less failed; I simplified the pedagogy, as you might when teaching a human to do something, and it succeeded.

I've since written code into the experimental branch of AutoHarp that can use existing, human-played drum loops as a "groove template" to make the loops that come out of this model feel more natural — that is, it will move notes slightly off the beat and give them more nuanced note velocities so as to give the loops groove and feel. I also hope that, as I develop the model, this will prove unnecessary. As when a human learns music, it seems that the basic rhythms have to be taught first. Actually "feeling" the music — making it musical — is a study that lasts a lifetime. Or that's how it is for a human, anyway — in the case of char-rnn, it might take just a few more epochs.
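
For the curious, the groove-template idea reduces to something like the sketch below: measure how far a human-played loop's hits sit off the 16th-note grid (and what their velocities are), then nudge the model's rigidly quantized output by the same amounts. This is an illustrative Python sketch, not the code in AutoHarp's experimental branch.

# Sketch of a "groove template": learn per-slot timing offsets and velocities
# from a human-played loop, then apply them to quantized model output.
# Hits are (tick, pitch, velocity); at 240 ticks per beat, a 16th note is 60 ticks.

SIXTEENTH = 60

def build_template(human_hits, bar_slots=16):
    """Map each 16th-note slot to the (timing offset, velocity) of a human hit."""
    template = {}
    for tick, _pitch, velocity in human_hits:
        slot = round(tick / SIXTEENTH)
        offset = tick - slot * SIXTEENTH          # how far ahead of or behind the grid
        template.setdefault(slot % bar_slots, (offset, velocity))
    return template

def apply_template(quantized_hits, template, bar_slots=16):
    """Nudge each on-the-grid hit by its slot's offset and blend velocities."""
    groovy = []
    for tick, pitch, velocity in quantized_hits:
        slot = (tick // SIXTEENTH) % bar_slots
        offset, human_vel = template.get(slot, (0, velocity))
        blended = (velocity + human_vel) // 2     # lean the dynamics toward the human feel
        groovy.append((max(0, tick + offset), pitch, blended))
    return sorted(groovy)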


Computer-Assisted Music Creation using AutoHarp: GENERATION 1

Posted by admin 14 Sep 2015 at 21:47

This whitepaper explores a process of creating music, specifically short (3 to 4 minute) pop songs, using an algorithmic music generation software package called AutoHarp. It details the process of creating The Inquisitivists' first musical release, the three-song Lost on this Island e.p., which will be linked here for download when it is available. Additional sound and data files generated by the program are embedded below as well. This is "Generation 1" of a long-term art and science project exploring the nature of creativity and machine intelligence. We expect this process to evolve and change as AutoHarp becomes more advanced, and as the field of machine creativity evolves in general. In this iteration, a human was heavily involved (e.g. as lyricist, singer, and producer). As the technology improves, we expect that involvement to change; indeed, the changing relationship between man and machine is the theme that drives the art created by this project.

AutoHarp is a suite of algorithmic music generation tools created by the author, currently in active development (cf. the other whitepapers concerning machine learning and neural networks on this site). The version that was used to create the songs on the Lost on this Island e.p. is documented and available for download as the mainline branch of this project on GitHub. It outputs music in the form of multitrack MIDI files (using MIDI format 1), with appropriate MIDI patches selected and assigned to each track (a bass patch for the bass part; pianos, organs, or strings for the rhythm instrument; drums for percussion; etc.).
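
For readers who want to poke at the output themselves, here is a small sketch of how one might inspect such a file. It assumes the third-party Python library mido, and "song.mid" is just a placeholder file name.

# Sketch: list the tracks, channels, and patch assignments in a generated MIDI file.
# Requires the third-party mido library; "song.mid" is a placeholder name.
import mido

mid = mido.MidiFile("song.mid")
print("MIDI format:", mid.type)                  # AutoHarp writes format 1

for i, track in enumerate(mid.tracks):
    name, channels, patches = None, set(), set()
    for msg in track:
        if msg.is_meta and msg.type == 'track_name':
            name = msg.name
        elif msg.type == 'program_change':       # the patch assigned to this part
            channels.add(msg.channel)
            patches.add(msg.program)
    print("track %d: name=%r channels=%s patches=%s"
          % (i, name, sorted(channels), sorted(patches)))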

Of the three songs on the e.p., two were generated via an iterative process described below. The third, "Falling for the World," was created by taking the following 8 bars that were generated by AutoHarp

and looping them, via cut and paste, in a digital audio workstation (Apple's Logic). For this composition in particular, the 8 bars above were the extent of the machine's tangible contribution to the song. From an artistic point of view (and probably also a legal one), though, that understates its role: the passage is recognizably the crux of the final song, and the song's lyrics tell the story of art generated by a machine. It is both foundation and inspiration. The debate over the machine's true role in creating this song, while absolutely within the purview of the project, is outside the scope of this paper.

The other two songs were created via the process that follows. Their foundational MIDI files were generated (along with somewhere between 250 and 500 others) to create an album for the 2015 RPM Challenge, an event in which the author regularly participates (click the link for more information). The author ran AutoHarp's "generate" utility (which creates a new song and outputs a MIDI file and a data file) over and over again, listening to each composition for as long as it held his interest, marking ones that were notable to him in some way, and passing over or deleting ones that he deemed unworthy. The relationship was much like that of a music producer with access to a composer who wrote new music for the asking, never grew tired, and was never offended when the producer rejected its songs. Here are some example snippets of generated pieces that were saved but, for one reason or another, were not developed any further:



From this process, the pool was winnowed down to a handful of songs (completing the RPM Challenge requires ten songs; the author selected 12 at this point), which were then workshopped. Just as a human songwriter will take a song that has a solid musical foundation and rewrite the bridge, repeat the chorus after the second verse, or add a prechorus, AutoHarp can do the same via the "regenerate" utility. The "generate" utility produces a data file along with the MIDI; "regenerate" takes that file as input and creates a new song. Deleting elements out of the file before feeding it back in causes AutoHarp to regenerate those parts, and changing chords or melodies manually causes AutoHarp to use those instead. "This Cosmic Place," which appears on the e.p., is the only song of the 12 that didn't go through this process--its structure, progression, and instrumentation remain as the machine originally wrote them ("Thief of the Daylight," by contrast, went through 10 different bridges before the author "suggested" his own set of chord changes and a new song structure with the bridge played twice, and had AutoHarp iterate on that structure).

At this point the resulting MIDI files were imported into Logic and assigned more fully rendered instrument patches (either built into Logic or from third parties). From here the machine's job was done--it had composed and played what it was going to play. The author wrote lyrics, sang them, and added guitar parts. Some extremely game musicians known to the author added additional vocals and guitar (thank you, Jennie and Andy). This is also the stage at which the author, as a music producer might, went in to tweak individual notes. An example of that from "Thief of the Daylight" appears below:

(AutoHarp's output) (the same section, produced)

Note how the machine's clav track repeats the same motif every measure, whereas the produced version has had its last four notes altered slightly on the second and fourth measures of the phrase.

After completing the RPM Challenge, the author did some polishing and then solicited opinions from some musical friends to determine what should be on the final release (thanks, everyone; you know who you are), some of whom knew the nature of the project and some of whom didn't. The result is the three songs that appear on the e.p. We hope you enjoy it and feel inspired to evangelize it a little. Links to the original MIDI files for each of the three songs, and to the final AutoHarp data files that will generate them, appear below.


Running Nginx and Apache Side by Side on a Single EC2 Linux Instance

Posted by admin 19 Aug 2015 at 21:28

Given that one of the focuses of this website is technology, and I've just spent the morning on a technological struggle in order to bring said website into being, I thought I'd spend a little time documenting that struggle. Also, in my pursuit of knowledge about the world, I've found articles like this to be incredibly helpful. We live in a world in which, if you have a problem, it is highly likely that both

  1. Someone else has had that problem, and
  2. They solved it and documented the solution

So here is the problem and its solution, documented for posterity. I hope you have arrived here via The Google in search of knowledge and that this article helps you. And, by the way, if this is a method that you use in general to solve problems, congratulations on your highly evolved state of being, and you might also be interested in the rest of the stuff we do around here. ANYWAY...

I wanted to run an Nginx site and an Apache site side by side on the same AWS (that's Amazon Web Services) EC2 Linux micro instance. My wife is a novelist, and I run her website. A t1.micro with a little extra swap space configured can easily handle all of the traffic we garner, and I didn't want to pay the extra $18/month for a second server. www.inquisitivists.com runs on Rails; her website is Perl CGI (and now the more clever amongst you know how old I am).

  1. Deploy the Rails website via Elastic Beanstalk
    I launched a version of the 64-bit Amazon Linux AMI with the relevant software already installed. You don't have to do this, but it got me up and running quickly. I'm assuming that if you're reading this, you know how to deploy a Rails app, so how you do it isn't really that big a concern. At the end of this step, you should have a Rails app running on Nginx on an EC2 instance. You should be able to ssh into that instance, and you should be able to hit the website via your browser, either via the Beanstalk URL or using the actual address of the EC2 instance. Also, either provision an Elastic IP or note the IP address of your instance; you'll need it to set up your DNS entries.
  2. Get Nginx to let go of port 80
    If you aren't deploying via Beanstalk, this is easy: ssh into your instance and sudo edit /etc/nginx/nginx.conf (this assumes the Amazon Linux AMI; if your nginx config file isn't there, you can run sudo nginx -t to get it to tell you where it is). Find the line that says
    listen 80;
    and change it to
    listen 1080;
    Reload the config by typing:
    sudo nginx -s reload
    If you are deploying via Beanstalk (as I did), this is somewhat more complicated. First, in the nginx.conf file above, comment out the entire "server" directive; you won't need it. Second, Beanstalk is a super handy deployment tool, but it wants to do things the way it wants to do them, and one of the things it wants to do is serve content on port 80. Since you want it to serve on a different port, you have to undo that step of the deployment each time. Here's how I did that:
    • I located the Nginx virtual host file that Beanstalk uses. It was symbolically linked in /etc/nginx/conf.d to /opt/elasticbeanstalk/support/conf/webapp_healthd.conf. From my research, it seems like this file and its location change when Beanstalk deploys a new version of its core software, so your mileage may vary here.
    • I sudo edited that file and again replaced port 80 with 1080.
    • I wrote a Perl script that does the same thing: it writes a new version of that file, replacing the listener on port 80 with port 1080 (a rough sketch of what such a script does appears after this list)
    • I added that script to the .ebextensions/ folder at the root level of the application that I'm deploying (this is on my local machine now) as "01rewrite_nginx_config"
    • I added a beanstalk config file named "01mod_nginx.config" into the same .ebextensions/ folder. It copies this script into the Beanstalk deployment hooks directory and makes it executable.
    • container_commands:
        copy:
          command: "cp .ebextensions/01rewrite_nginx_config /opt/elasticbeanstalk/hooks/appdeploy/enact/"
        make_exe:
          command: "chmod +x /opt/elasticbeanstalk/hooks/appdeploy/enact/01rewrite_nginx_config"
      
    • I deployed that content via the Beanstalk CLI
    • This is a slightly different method from what any Beanstalk documentation will tell you to use to modify your deployment; again, that's because you're doing something somewhat non-standard.
  3. Set up your Apache site
    The way to set up and serve an Apache site is documented, like, everywhere in the world, so go ahead and find those instructions and do that, or whatever. One important bit: make sure your Apache has mod_proxy installed. At the end of it, you should be able to start Apache, and since Nginx is no longer listening on port 80, it should start right up. You should now be able to go to your site in a browser and hit the Apache-served site.
  4. Now a Miracle Occurs™ (if you're skimming this article looking for the solution to your problem, STOP HERE: THIS IS IT)
    We're going to use Apache as both a proxy and a web server. The way we're going to do that is this: sudo edit /etc/httpd/conf/httpd.conf. You will need to do and/or check several things:
    • Make sure you are loading all of the mod_proxy modules. You should have a bunch of lines that look like this:
      LoadModule proxy_module modules/mod_proxy.so
      LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
      LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
      LoadModule proxy_http_module modules/mod_proxy_http.so
      LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
      LoadModule proxy_connect_module modules/mod_proxy_connect.so
      
    • Add a listener on port 2080. This will be your actual Apache website. Port 80 is going to become your proxy.
      Listen 80
      Listen 2080
      

      (That "Listen 80" line should have already been there)
    • Add virtual servers for each of your sites via port 80. You can copy and paste the code below, mutatis mutandis:
      NameVirtualHost *:80
      <VirtualHost *:80>
          ServerName [the domain of your Apache Site].com
          ServerAlias www.[the domain of your Apache Site].com
      
          ProxyRequests Off
          <Proxy *>
              Order deny,allow
              Deny from all
              Allow from 192.168.0
          </Proxy>
      
          ProxyPass / http://localhost:2080/
          <Location />
              Order allow,deny
              Allow from all
          </Location>
          ErrorLog /var/log/[yourapachesite]-error.log
          CustomLog /var/log/[yourapachesite]-access.log common
      </VirtualHost>
      <VirtualHost *:80>
          ServerName [Your Nginx Site].com
          ServerAlias www.[Your Nginx Site].com
      
          ProxyRequests Off
          <Proxy *>
              Order deny,allow
              Deny from all
              Allow from 192.168.0
          </Proxy>
      
          ProxyPass / http://localhost:1080/
          <Location />
              Order allow,deny
              Allow from all
          </Location>
      </VirtualHost>       
      

      What you've done here is tell Apache that port 80 is a proxy server; it proxies requests for your Nginx domain to port 1080 of the localhost (your EC2 instance), where Nginx is waiting to serve that site, and it proxies requests for your Apache domain to port 2080, where the same Apache is waiting to serve that site. Note that the server in the ProxyPass line is literally "localhost." You're just telling Apache to proxy to the local server to get content to serve back to the user. You may be tempted to fill in your actual domain with the port number attached instead. Don't do this: first off, it won't work, because you haven't opened those ports via EC2 security groups, and if you do open them, Apache will get really freaking confused.

  5. Point both Domains to your EC2 IP Address
    Do that. If you're reading this article, you probably know how to do that; if you don't, find the instructions from your specific domain registrar. Note only that those domains have to match exactly the domain names configured as virtual servers in the step above, and if you are going to configure any other subdomains, those need to be represented as server aliases so that Apache knows where to send them when they come in.
  6. Profit
    Once the DNS records propagate, you should be up and running.
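
For reference, here is roughly what the port-rewrite hook from step 2 needs to do. My actual hook is a Perl script; this is an equivalent sketch in Python, and the config path is the one I found on my instance, so adjust it if Beanstalk has moved things around on yours.

#!/usr/bin/env python
# Illustrative sketch of the deployment hook described in step 2: rewrite the
# Nginx virtual-host file Beanstalk lays down so it listens on 1080, not 80.
# The real hook on my server is written in Perl; the path may differ on yours.
import re

CONF = "/opt/elasticbeanstalk/support/conf/webapp_healthd.conf"

with open(CONF) as f:
    config = f.read()

# Swap the port-80 listener for port 1080, leaving everything else untouched.
config = re.sub(r"listen\s+80;", "listen 1080;", config)

with open(CONF, "w") as f:
    f.write(config)
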
Did this help you? Let me know if so, or tell me the error you ran into in the comments and I will attempt to assist you further.


Once We Made A Video...

Posted by admin 19 Aug 2015 at 18:04

It took us a long time. It is surprisingly hard to explicate a coherent thesis in a seven-minute film.

But it turned out pretty well, and we're proud of it. So here it is.

The Inquisitivists - Episode One: "Aha!" from The Inquisitivists on Vimeo.
