The Dev Pages

A knowledge base for web applications development (and beyond)

Archive for the ‘General Dev’ Category

10:36:54.086 [error] Postgrex.Protocol (#PID<0.522.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect ( connection refused - :econnrefused

psql: error: could not connect to server: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

Even if brew services restart postgres is successful, an error may still be occurring. Check your log, which for homebrew's install was tail -n 50 /usr/local/var/log/postgres.log for me. If you see something like the following, you may need to switch a package back to an older version:

dyld: Library not loaded: /usr/local/opt/icu4c/lib/libicui18n.64.dylib
Referenced from: /usr/local/opt/postgresql/bin/postgres
Reason: image not found

For me, after an OS update, the icu4c package had been changed, so I needed to switch it back using brew switch icu4c 64.2.
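The check above can be sketched as a small shell helper. This is my own sketch, not from the original troubleshooting session; the log path is just homebrew's default on Intel Macs and may differ on your machine.

```shell
# check_pg_log: scan the tail of a Postgres log for the dyld/icu4c error
# described above. Pass whatever log path your install uses, e.g.
# /usr/local/var/log/postgres.log for homebrew on an Intel Mac.
check_pg_log() {
  if tail -n 50 "$1" 2>/dev/null | grep -q 'Library not loaded: .*icu4c'; then
    echo "icu4c mismatch: try 'brew switch icu4c 64.2'"
  else
    echo "no icu4c error found"
  fi
}
```

Run it as check_pg_log /usr/local/var/log/postgres.log after a failed brew services restart postgres.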

 1) feature does X (BaseApp.UIWeb.Test.Acceptance.TheTest)
     ** (RuntimeError) invalid session id
     code: |> session_login()
       (wallaby) lib/wallaby/httpclient.ex:136: Wallaby.HTTPClient.check_for_response_errors/1
       (wallaby) lib/wallaby/httpclient.ex:56: Wallaby.HTTPClient.make_request/5
       (wallaby) lib/wallaby/webdriver_client.ex:254: Wallaby.WebdriverClient.visit/2
       (wallaby) lib/wallaby/driver/log_checker.ex:6: Wallaby.Driver.LogChecker.check_logs!/2
       (wallaby) lib/wallaby/browser.ex:963: Wallaby.Browser.visit/2
       (ui) test/support/acceptance/rivendell_wallaby_session.ex:73: Rivendell.UIWeb.Test.Support.Acceptance.RivendellWallabySession.session_login/1

The solution for this was to run brew cask upgrade chromedriver and possibly to update your Chrome browser. The comments on the related issue/repo were very helpful.

TL;DR: apt-key adv --refresh-keys --keyserver (use with caution) may go a long way toward getting your build to work.

Some noteworthy things I learned about the interplay between codeship, docker, ubuntu apt-get, and the yarn package. This resolution may help with issues involving:

  • Invalid signatures with the yarn ubuntu package.
  • Errors running apt-get update
  • Errors updating the Dockerfile for a CodeShip build
  • Dealing with CodeShip build steps updates and cached steps

For now, I’m noting my crazy sequence of realizations in reverse order.

A failed step on the server with CodeShip may not be very enlightening. If you see an apt-get install -y <some-package> returned a non-zero code: 100 error, you may need to run the step locally to see the full error using jet steps.

If you see the above error, it may be caused by a step whose Dockerfile RUN command includes apt-get update -y.

The apt-get update may be failing because a package repository has an invalid key. For me this was the yarn package, spitting out GPG error: stable InRelease: The following signatures were invalid: EXPKEYSIG XXXXXXXX Yarn Packaging <>. Adding apt-key adv --refresh-keys --keyserver may resolve that issue.

You may see a codeship error like:
2 errors occurred:
* (step: dependencies_X-deps) error loading services during run step: failure to build Image{ name: "static.X", dockerfile: "/<project-path>/docker/app/Dockerfile", cache: true }: The command '/bin/sh -c apt-get update -y   && wget   && apt-get install -y ./google-chrome-stable_beta_amd64.deb   && rm google-chrome-stable_beta_amd64.deb   && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100

With codeship steps, if you edit the Dockerfile RUN command, and it is equivalent to a previous version of that RUN command, that command will be cached and may not trigger an error you expect. In this case it was more painful when trying to add a separate and new RUN command with apt-get install -y libxss1 while updating the build to have a chrome install with the libXss dependency included.

The ubuntu stable package may no longer work for you if you are using puppeteer in conjunction with a headless chrome browser to generate screen captures/pdfs on a server. It can be a bit painful to locate a list of package versions for chrome on ubuntu, and sticking with stable and adding an apt-get install -y libxss1 to your RUN command is one way to go. This may stem from seeing something like:
/app/pdf-thing/node_modules/puppeteer/.local-chromium/linux-XXXXX/chrome-linux/chrome: error while loading shared libraries: cannot open shared object file: No such file or directory
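Pulling the pieces above together, the Dockerfile RUN command might look like the following sequence (shell form, one command per line for clarity). The angle-bracket placeholders are mine; the original post's keyserver argument and wget URL were elided.

```shell
# Sketch of a Dockerfile RUN sequence: refresh the expired repo key, update,
# and install chrome together with the libXss dependency.
# <keyserver> and <chrome-deb-url> are placeholders, not from the original post.
apt-key adv --refresh-keys --keyserver <keyserver>   # use with caution
apt-get update -y
wget <chrome-deb-url>
apt-get install -y libxss1
apt-get install -y ./google-chrome-stable_beta_amd64.deb
rm google-chrome-stable_beta_amd64.deb
rm -rf /var/lib/apt/lists/*   # keep the image layer small
```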

I wanted to offer some more clarity on converting a hard drive to the APFS format, for upgrading from Mojave to Catalina, or something similar. I could not upgrade without doing so. I was surprised at how technical the process was, but I think it’s manageable for most people, though it is terrifying to have to adjust boot settings (back in my Windows days this was sometimes a non-recoverable screwup). After a successful conversion to APFS, I could not boot out of recovery mode and would get the “Running bless to place boot files failed” error.

First convert the Hard Drive

  1. Do any backing up before converting. I put it off, and ended up fairly nervous when my OS would no longer boot 🙂 Luckily, I ended up back on the same OS with a converted hard drive and proceeded to upgrade.
  2. For me, I had to boot in recovery mode to get the option I needed. Restart your Mac and hold down Cmd+R to boot into recovery mode.
  3. Launch the Disk Utility app from the options.
  4. Select your hard drive and then its main volume, then unmount it (I had to do this to get the convert option to be enabled).
  5. Select Edit then Convert to APFS
  6. It should convert after about two minutes and you can reboot.

At this point, I restarted and found my Mac would always boot into recovery mode. When I selected the main hard drive and tried to restart from that, I got an error indicating “Running bless to place boot files failed.” There were some helpful sites, but the instructions didn’t quite match up for me, and the more technical part of creating a folder with the hard drive’s unique ID was not very clear. The following are the steps I took (you may be able to do this in fewer steps than me).

  1. In recovery mode open the terminal from the menu Utilities -> Terminal.
  2. List the hard drive info with diskutil apfs list
  3. If you’re like me, you already have a Preboot volume. But apparently you may need to create one with diskutil apfs addVolume disk<disk number> APFS Preboot -role B
  4. The steps in the articles I was seeing got less clear to me here. Props to the existing info, but I wanted to add my two cents. At this point I needed to copy existing preboot files into another preboot folder. Before I could use a copy command, I had to create a new preboot folder with the UUID of the hard drive volume disk2s1 (or whatever your main apfs partition is listed as in the diskutil apfs list output). UPDATE: The UUID to use should be that of the primary partition (and not the pre-boot partition) – thanks to opherko. The command I needed was: mkdir -p /Volumes/Preboot/<The UUID, a long alphanumeric id>/System/Library/CoreServices
  5. At that point I could do cp -RP /Volumes/<Your hard drive name here>/System/Library/CoreServices /Volumes/Preboot/<The UUID>/System/Library/CoreServices. It would seem to me you are copying your old, functioning boot files into a new boot folder for your converted hard drive. Perhaps temporarily until another process adjusts things. Make sure, as noted above, that the UUID is the primary partition’s UUID.
  6. Now you can run the update commands for the preboot. Replace disk2s1 with your disk: diskutil apfs updatepreboot disk2s1
  7. Run the bless utility: bless --folder /Volumes/<Your hard drive name here>/System/Library/CoreServices --bootefi --verbose
  8. You may need to use the Startup program to assign the boot partition when exiting the utilities.
  9. At this point I was very relieved that upon rebooting, you should no longer get the error “Running bless to place boot files failed.” and the OS should start. For me, I was back in Mojave, with all my same files on a hard drive now formatted as APFS, from which I could proceed to install Catalina. Hopefully this informs a successful end result for someone else out there.
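For reference, steps 2 through 7 above condense into the following sequence, run from the recovery-mode terminal. This only mirrors the steps already described and is only safe once the placeholders are filled in correctly for your machine.

```shell
# Condensed recovery-mode sequence from the steps above. <UUID> is the primary
# APFS partition's UUID from `diskutil apfs list`; <HD> is your hard drive's
# volume name; replace disk2s1 with your own partition identifier.
diskutil apfs list
mkdir -p /Volumes/Preboot/<UUID>/System/Library/CoreServices
cp -RP /Volumes/<HD>/System/Library/CoreServices \
       /Volumes/Preboot/<UUID>/System/Library/CoreServices
diskutil apfs updatepreboot disk2s1
bless --folder /Volumes/<HD>/System/Library/CoreServices --bootefi --verbose
```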

Protected: Miscellaneous thoughts

Posted on March - 17 - 2019

This content is password protected. To view it please enter your password below:

How and why software development

Posted on March - 21 - 2017

I was thinking I should create some reusable language. Like a good software engineer. The topic of software development comes up somewhat frequently with people, at least as to explaining what I’m doing with my career at any given time. People tend to wonder how I’ve managed to find gainful employment between my various educational and career-changing endeavors. There’s certainly a component of luck to that. I’ve also been asked several times if I would recommend software development as a career generally.

Working in software has been a love-hate relationship. In a nutshell, I love the flexibility it has given me, the constant intellectual challenge, and the opportunity for creativity and self-autonomy. But I often struggle with being at a screen so long without human interaction. Good projects and associates are collaborative, though, and I have learned a ton from some amazing people. All in all I’d consider my career path lucky. I hope it continues to be that way in the future.

My ultimate advice is that anyone should try coding, if one is at all curious. If you like it for the sake of coding, stick with it. It’s like a switch within that is hard to turn off once ignited. It’s a crossroads that comes about relatively quickly. If it’s a legitimately miserable experience and you can avoid being an abused commodity, then move on to something less soul-sucking. There are many redeeming jobs and characteristics that are likely correlated with someone not being a good coder.

How to figure out if the switch will click is harder to say. I’m not sure exactly how or when it happened for me. Here’s my best guess: find something code-related and legitimate to figure out. After getting something like that successfully set up, evaluate the suffering and pleasure you’ve experienced. Did you enjoy it, intrinsically? If it was like a good workout or a painful massage or a worthwhile hard read, something somewhat masochistic but ultimately enjoyable, then you may be ok. If it was like watching CNN or reading a lot of opinion-based facebook posts or calling a cable/satellite provider or waiting to take photographs, something harrowing, then move on.

It’s really not about understanding the weird syntax of code. But you do need to be figuring out what some of that language and structure is achieving at some early point. Getting something working is the first step and then caring about understanding HOW it’s done so you can effectively re-create something similar is also very important. So maybe one needs to try something out, and then do a variation on it while forcing yourself to understand how to accomplish that variation from the principles you learned in the first task. I’m always curious if good coders would always know how to change their car’s oil, or just be curious DIY project people all around. I suppose there is often a presence of personality-specific abilities to hyper-focus, analyze, and maybe be introverted (for better or for worse).

I would say a good place to start is to figure out how to buy a domain, setup web hosting, and get an html page loaded with some decent styling and an image carousel loaded using a javascript library like jQuery. Setting up a blog doesn’t count.

Another one might be getting a web page to show some ugly text, but to generate the text from a database. Or print your favorite playlist from your spotify or similar account using an API data source.

Another one might be getting a mobile app setup and loading on a test environment on your actual phone and have a menu with legitimate options that loads as a good starting point.

If you like the problem solving aspect or if you like to work on your own on technical things you may be well suited for coding. The best and the brightest are also good collaborators.

You will be using w3schools, stackoverflow, and google a ton to get started. And once you figure some stuff out, look into finding a project at work where you can apply code if at all possible. Or save up money or get someone who supports you to fund code academy classes or a start or return to a university. I’m skeptical about most for-profit institutions.

I know a couple of individuals where both tried coding. One took a class in college and the other came back to it after college and did a code academy. The one who took the college class at the outset never got to enjoy it and the switch never clicked. I’ve met plenty of people who tried code academies to the same end. The difference for the other friend was that they just somehow enjoyed it and desired it and got to the point where stuff started clicking. It’s always been the same way with so many people I’ve met. Either they’ve naturally had a fierce desire and possibly a natural aptitude, or they’ve taken a crack and never really had anything click. There’s just some fierce dichotomy that exists out there. I’m still trying to figure it out. Maybe it’s something as simple as being willing to google and pound your head against the wall to get a semi-colon in the right place and finally understand how something works to great satisfaction. Who knows? But figuring out if you can flip the coding switch is something I would recommend to anyone who is at all curious.

My roots in technology and coding that could possibly explain/demonstrate my particular curiosities:

  1. At the age of 10 I hammered on an old-school IBM until I figured out how to draw lines in a DOS-based program called Harvard Graphics.
  2. At the age of 12 I learned how to boot into the DOS operating system so I could load a computer game, Joust VGA, from a floppy disk and play it on my own.
  3. At the age of 14 I learned how to use text commands to navigate BBS (Bulletin Board Service) portals, which were like a dial-up text based website you could navigate from the command line or a command-line like tool for BBS-ing. I wanted to download music. I still remember illegally downloading my first song, Cream’s White Room. (1)
  4. At the age of 17 I took a C++ class in high school and had fun with it.
  5. When I was 18 in college my Object-Oriented Computer Science class in Java seemed like cake after learning some C++. I think this was when I realized the switch had clicked.
  6. When I was 22 I learned to build html documents and use javascript.
  7. When I was 23 I convinced one of the world’s greatest bosses to let me learn Coldfusion, a server side language like php or python (well, a little bit weirder). I stayed up late for a couple of nights, and I came back for a second interview proving I could query data from a database using the stuff. Since then I’ve always had jobs in development, learning things as I went, from my Information Systems program to some extent, but largely from the jobs I’ve had and opportunities to learn new technologies.

I don’t know what any of that proves. I think a common theme is giving a shit about mysterious details and figuring them out to achieve something interesting. As I’ve gotten older and money became involved I didn’t necessarily need the gratification of a childish drawing (though that is the end product of some recent projects) or a video game or a song. But I still really enjoy knowing how technical stuff works, to the point of knowing some stuff about fixing my car and creating music with an instrument and understanding how areas of finance and law and other dense subjects work. So maybe a part of it is being a hedonistic control freak. I think those character traits can also be very rewarding.

1 – TODO:NW middle school aol password phishers and sketchy affiliates and the woes of not figuring out internet browser temporary file storage fast enough.
2 – My crazy career path on LinkedIn.

… The job stuff (TODO:NW move to another post)

Finally, there’s a point where you actually have to get started with a money-making gig. Once you get your first one, you may very well be set forever.

<from a recent email, probably not entirely useful>

1 – Go to ‘meet ups’, I know in other cities they are huge. SLC has a smaller scene, but people seem to think they are useful. I’ve almost gone to a couple and have been meaning to go. My server side language, php, isn’t as big, but the javascript I do (angular 2 and react), some mobile stuff, and server admin/dev ops have some interesting groups. Some guys I worked with last summer were into them, and said recruiters and people hiring will stick around after to look for people and often sponsor the meet ups. There are some javascript, angular 2, and react groups that could be good.
2 – Go nuts with responding to craigslist, ksl, and other job boards. While I lived in Austin last spring, I found a job post on a U of U job posting board for ‘data research’. It turned into me doing scripting (great for learning python) and doing a ton of work on an angular JS app when I got back to Utah. It was such a fun project. We worked for the Weinholtz campaign, and visualized voting data for Utah.
With hitting up postings it never hurts to have people look over your resume and form cover letters. To a certain degree, getting that first job will include having a resume where nothing is all that impressive for the industry. But wording and layout and how you paint past experience counts for something. With coding in particular, being a fast learner is huge.
3 – Take a job in something else, and as part of the deal offer to work for free/get training on a dev project. I had a campus job where we hired QA and copy writer people and let them do html and JS and get the experience and title for their resume and go other places. Unfortunately this is somewhat rare and involves finding a good employer.

Using php with Angular 2 and Laravel

Posted on August - 11 - 2016

I’ve been using Angular 1.0 for the last few projects, but I wanted to get familiar with Angular 2. I found the process of getting up and running with php a bit annoying.

SUMMARY: With php, you can use the ‘5 Minute Quickstart’ tutorial google offers with some adjustments to asset locations, config paths, node packages, and automated tasks (most notably the typescript transpilation process) to get setup with Laravel and Angular 2. Google uses a node app, and a lite node server. I wanted to avoid this and use php and laravel to deploy assets, etc. and use Angular 2. Conceptually:

  1. Make sure you understand which packages you will need. The best would be just to copy the ones from the Angular 2 google tutorial, though not quite all of them will be used (we will let Laravel handle the typescript transpilation).
  2. Make sure you understand where the hell all the files should go. You need to change google’s root level file structure of the example node app to a php laravel app where public assets reside in the nested ‘public’ folder within the root app. Paths in the files need to be adjusted accordingly.
  3. Make sure you understand how to adjust the automated tasks in the gulp file, and in the laravel-elixir-typescript node package. This is mainly about how to get your typescript files to end up in the right place when they are compiled to javascript.


I’ve tried to explain my experience in a way that will make things easier for someone trying to set up any general php app with angular 2 at any point in the near future. The packages, versions, and syntax of the code may vary in the future, but if you understand what changes you need to make conceptually, it will hopefully be easy to use Google’s tutorials and documentation for the next while. My experience was based on a somewhat murky guide and its accompanying video; I found I needed some clarifications on paths, etc., and the packages and versions used were outdated or unexplained. I tweaked that tutorial. I was using Laravel 5.2.43 and have tried to follow the steps in the google tutorial.

Step 1: Create and configure the Laravel and Angular 2 project

I took a laravel project and figured out where to put all the files from the google Angular 2 tutorial. Some files from the tutorial could just go in the root of the app. The following things need to be adjusted:

a) Create a laravel project, and change into that directory. Mine is called laravel_angular2.

laravel new laravel_angular2
cd laravel_angular2

I’ve gotten used to setting up a virtual host and getting the app up and running with php and apache. Hopefully you’ve already made it to this point.

b) Package definition and config files. Copy the typings.json file into the root of your project. Ignore the tsconfig.json file, as we will use laravel and gulp to handle typescript transpiling and serving the app. Put the systemjs.config.js file in the /public folder of laravel. UPDATE THE PATHS in this file to point to your public folder (/) instead of /node_modules. The package.json file needs to be merged with the existing laravel one. You can essentially copy the dependencies and devDependencies portions of the tutorial package.json file into the package.json file in the root of your laravel app. You can tweak the packages, perhaps removing the lite server and typescript stuff that google includes. As for me, I just left all the packages google indicated in there. I have yet to figure out a good way to determine, when cloning projects, how the hell to figure out which packages are unnecessary, and which should just go in the devDependencies section when the list gets real long and full of unfamiliar packages. The scripts and other sections of the tutorial package.json can be ignored since, again, we will use laravel and gulp to handle typescript transpiling and serving the app.

c) Install packages. npm install should work fine. Somehow I always screw the packages up and get error messages, but nothing helpful on how to avoid this comes to mind. Hopefully you only get some warnings. I could not get npm run typings install to work, as per the tutorial. I don’t know if this is because we removed it from the scripts section of the package.json file. You can try adding this section and run the command. I suspect it had more to do with the fact that the console wanted a cli package for typings. Do npm install typings --global to get the global cli package, then typings install. We need to deal with the typings folder and move it into public. I didn’t get errors in my quickstart app without moving the typings folder, but it should be moved to the public folder. So, move the ‘typings’ folder into /public. You could probably use a gulp task command to do this. You could also figure out how the hell to adjust google’s code to put this typings folder in public automatically. I never got to that point. The typings.json config file made no sense to me and I don’t know any of the conventions on using that package. I actually don’t even know what the typings folder is used for, but figured it should go in the public folder.
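The package installation in (c) boils down to this sequence, assuming you run it from the Laravel project root:

```shell
# Recap of step (c): install the merged package.json dependencies, get the
# typings CLI, run it against typings.json, then move the resulting typings
# folder into /public so it is served with the other assets.
npm install
npm install typings --global
typings install
mv typings public/typings
```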

Ignore the ‘Helpful scripts’ section, as we will use laravel and gulp to run the typescript transpiling and the server.

Steps 2-4: Our first Angular 2 component, etc.

Instead of creating an ‘app’ root folder, make a folder ‘/resources/assets/typescript/app’ to house your typescript files. They will be transpiled to the /public/app folder later. Put all the files mentioned in the tutorial in steps 2-4 there.

Step 5.0: Making sure our Angular 2 typescript transpiles using Laravel and Gulp

I couldn’t get google’s typescript transpiler to work easily, and I wanted to use the one included in the laravel packages. The tricky part about this is avoiding jamming all the javascript into a single app.js file. There are 2 ways I’ve done this. The second way is demonstrated in the tutorial (and the youtube video), but it involves modifying a node package in the node_modules folder, which is fragile, in my opinion.

  1. Use the gulp typescript library and make a task for transpiling the javascript from typescript.
    var ts = require('gulp-typescript');
    gulp.task('typescript', function(){
      var assetPath = './' + elixir.config.assetsPath;
      var search = '/typescript/**';
      var options = {
          // If you use ES5, adjust the target accordingly
          "target": "ES6",
          "module": "system",
          "moduleResolution": "node",
          "sourceMap": true,
          "emitDecoratorMetadata": true,
          "experimentalDecorators": true,
          "removeComments": false,
          "noImplicitAny": false
      };
      var outputFolder = 'public';
      // transpile each file separately; typescript/app/*.ts lands in public/app
      return gulp.src(assetPath + search)
          .pipe(ts(options))
          .pipe(gulp.dest(outputFolder));
    });
    // ...in the elixir function
    mix.task('typescript', 'resources/assets/typescript/**');
  2. This is the weird custom step I found useful. It gets the transpiling to work using the npm package elixir-typescript. This involves tweaking the package code, which will not be committed to git as it lives in the node_modules folder. I don’t know if there is a better way to do this, but the idea comes from the aforementioned tutorial. Add the elixir-typescript package, version 1.1.2: npm install elixir-typescript@1.1.2. I haven’t added this to the package.json file, as we are editing the package manually, but you could add it. Be aware that on certain npm actions, the required edit that makes our transpiling work may be overwritten and break.
    • In the file /node_modules/elixir-typescript/index.js, comment out the concat line. //.pipe(concat(outputFileName)). This was on line 28 for me. This is so that when the typescript files are transpiled, they don’t all get mashed into one app.js file. Angular 2 would not like this. Every time you do npm install, you may have to redo this!
    • add the following to your gulp file. The first line was already there for me:

      var elixir = require('laravel-elixir');
      var elixirTypescript = require('elixir-typescript');

      elixir(function(mix) {
        // If you use ES5, adjust the target accordingly.
        // NOTE: check the exact mix.typescript() signature against the
        // elixir-typescript version you installed (1.1.2 here).
        mix.typescript('app.js', {
          "target": "ES6",
          "module": "system",
          "moduleResolution": "node",
          "sourceMap": true,
          "emitDecoratorMetadata": true,
          "experimentalDecorators": true,
          "removeComments": false,
          "noImplicitAny": false
        });
      });

      Run gulp typescript to make sure it works. There should now be javascript files in /public/app. The paths here are weird and took me some tinkering to figure out. More on this later. These 3 steps were definitely the trickiest part of the whole process. It’s more about if you’re keen on learning how to use typescript with your php projects, as similar steps would probably need to be used in any tool that uses typescript on the frontend.
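Since the node_modules edit above has to be redone after every npm install, it is worth scripting. This sed helper is my own sketch, not from the tutorial, and it assumes the concat line still reads exactly .pipe(concat(outputFileName)):

```shell
# comment_out_concat: prefix elixir-typescript's concat pipe with // so the
# transpiled files are not mashed into a single app.js. A backup of the
# original file is written with a .bak suffix.
comment_out_concat() {
  sed -i.bak 's|\.pipe(concat(outputFileName))|//.pipe(concat(outputFileName))|' "$1"
}
# Usage: comment_out_concat node_modules/elixir-typescript/index.js
```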

Step 5

Setup the welcome/index/homepage of your Laravel app to load the Angular 2 app. Instead of making an index.html as per the tutorial, update your home view, or whatever you want to run the angular 2 app, to use the html that is in the tutorial. We will need to modify the url paths since the packages used by Angular 2 reside in node_modules folder, which is not in /public. We also need to copy the used packages into the /public folder so we can reference them from the script tags. We will accomplish this by:

  1. Input the html, etc. from the tutorial. In my /resources/views/welcome.blade.php file I added Loading... in the content div. Copy the script tags from the tutorial into the head section.
  2. Add the following commands to the Laravel gulp file in the elixir function:

    mix.copy('node_modules/core-js', 'public/core-js');
    mix.copy('node_modules/reflect-metadata', 'public/reflect-metadata');
    mix.copy('node_modules/zone.js/dist/zone.js', 'public/zone.js/dist/zone.js');
    mix.copy('node_modules/systemjs', 'public/systemjs');

    The following 3 packages are also necessary for the framework.

    mix.copy('node_modules/@angular', 'public/@angular');
    mix.copy('node_modules/angular2-in-memory-web-api', 'public/angular2-in-memory-web-api');
    mix.copy('node_modules/rxjs', 'public/rxjs');
  3. Remove references to node_modules from the script includes in your view file. For example, ‘node_modules/core-js’ should just be ‘core-js’. I edited my /resources/views/welcome.blade.php and removed all the ‘node_modules’ from the script tags. The systemjs.config.js file should already be in the public folder from Step 1. Again, note that you should remove references to the ‘node_modules’ path in this systemjs.config.js. And with that, run gulp, hit your Laravel app’s url, and the quickstart app should load! You’ll probably want to add a gulp task to monitor changes to your typescript. And this is usually the point in a web post where I’m like ‘What the hell, this isn’t working and you seem to be missing some explanations.’ As I think of them, I’ll try to stay current and add more explanations and pitfalls to avoid.

Summary: What you want is to set up a null client. See my notes below for what constitutes an ‘smtp server entry’.

So for my local dev environments I’ve been in the habit of setting up php’s mail function to work by doing a ‘sudo apt-get install sendmail’ and editing the php.ini to point the sendmail_path to /usr/sbin/sendmail. (I’d recommend this procedure only if you’re working locally and don’t plan on opening port 25 to anything public, won’t want to mess with domain names, and won’t be dealing with mx records, etc.)

Well when I actually had to setup a server with a public domain, and needed emails to work efficiently from php with a qualified domain while worrying about port 25 being secure, I was in for some fun. For starters I’d recommend postfix over sendmail. Much easier to configure. Sendmail has many more config files, and you have to re-compile some of them after edits, etc. So once you have postfix installed, if you just want to send outgoing emails, then you can increase security and reduce overhead by making postfix not listen on the SMTP port. I wanted postfix as an smtp client only. With sendmail you can do this, and even kill all the daemons. With postfix you still need the daemon going, but when we’re done, nothing will be listening on port 25, smtp.

So what we are setting up is called a ‘null client’. You just have to modify /etc/postfix/main.cf and master.cf according to the null-client instructions in the postfix documentation.

The main thing that was unclear to me is what line(s) constitute an ‘SMTP server entry’ in master.cf. After commenting out the line close to the top with service ‘smtp’ and type ‘inet’, I figured this was enough, as ‘sudo lsof -i’ indicated nothing was listening on port 25, or as smtp. I would leave the other smtp service entries alone, the ones with type ‘unix’.

Then do ‘sudo postfix reload’ and for good measure we may as well do ‘sudo /etc/init.d/postfix restart’, or the equivalent on your linux distro.
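As a sketch, the master.cf edit can be scripted like this. The sed pattern assumes the stock entry where ‘smtp’ and ‘inet’ are the first two columns; verify against your own master.cf before trusting it.

```shell
# disable_smtp_listener: comment out the inet-type smtp service entry near
# the top of master.cf so postfix stops listening on port 25 after a reload.
# A backup of the original file is written with a .bak suffix.
disable_smtp_listener() {
  sed -i.bak 's/^smtp[[:space:]][[:space:]]*inet/#&/' "$1"
}
# Usage:
#   disable_smtp_listener /etc/postfix/master.cf
#   postfix reload   # then check with `lsof -i` that port 25 is quiet
```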

PHP and MySQL setup on Mac OS X 10.5 Leopard

Posted on September - 10 - 2009

Full fledged open-source MAMP development environment with php, mysql, and apache on Mac OS X 10.5 Leopard

Goal: A complete php development environment using Mac OS X 10.5 Leopard’s out of the box apache2/php install, and an install of the latest mysql and eclipse software with all the necessary plugins for php debugging. ALL 64-BIT!

Admittedly, it was a challenge to get a fully functioning php dev environment up based on Mac OS X 10.5 Leopard’s configuration. But I succeeded in not installing a separate apache/php 32-bit install, or bailing out to use a linux Virtual Box.

Enabling PHP


This one was pretty easy. Just uncomment the line

#LoadModule php5_module        libexec/apache2/

in the httpd.conf apache config (/etc/apache2/httpd.conf) so it includes the php5 module that comes with the OS.

Make sure your extension_dir in php.ini points to /usr/lib/php5/extensions/no-debug-non-zts-20060613/ or go nuts and do your own extension directory.
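If you’d rather do the uncommenting from the terminal, here is a sketch of my own (not from the original setup notes); a backup is written alongside as httpd.conf.bak:

```shell
# enable_php_module: strip the leading '#' from the php5 LoadModule line
# in an apache httpd.conf, leaving a .bak backup of the original.
enable_php_module() {
  sed -i.bak 's/^#\(LoadModule php5_module\)/\1/' "$1"
}
# Usage: enable_php_module /etc/apache2/httpd.conf
# then restart apache with: sudo apachectl restart
```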

Debugging 64-bit

This was one of the trickier things. You need to get an X-Debug extension setup. Hopefully you can just use my 64-bit extension file, and put that in your extensions directory (/usr/lib/php5/extensions/no-debug-non-zts-20060613/). Then add the zend_extension directive to the php.ini, along with the X-Debug settings, pointing to your (local or remote) host. In your php.ini:

[xdebug]
xdebug.remote_enable=true
xdebug.remote_host=localhost  ; if debugging on a remote server, put the client IP here
xdebug.remote_handler=dbgp    ; (specific to 64-bit Mac OS X)

If that short version doesn’t work, you need to compile a 64-bit extension from the xdebug source, which was sort of tricky. You’ll need to get a compiler installed on your Mac OS if you haven’t got the right developer tools installed (XCode from the install disk or Apple’s website), and then follow the instructions in this article.

Installing MySQL

Use the installer from MySQL’s site, and it goes pretty seamlessly. You may have to edit the php.ini to use the mysql server.

The tricky part of this is if you use a framework, or your code uses the pdo database interface. Again, you can try my 64-bit version, or compile your own pdo_mysql extension. Enable the extension by adding the corresponding line to the php.ini (specific to 64-bit Mac OS X).


Installing Eclipse

There is a Cocoa version of Eclipse that is 64-bit. The difference here, as I’ve read online, is that the Carbon version is more stable, but it is legacy and will be deprecated in the future.

I love using the update site to get plugins. That seemed to work best for PDT php, aptana, SVN (subclipse), and various editors, etc.


I sort of copped out here when I learned the 64-bit version of eclipse doesn’t work well with Flex-Builder as an Eclipse plugin. I’m planning on installing the stand-alone version of flex builder and using that separately (a little bit resource-wasteful, but far more convenient).