Literature Review #1

Circumstellar disks around Herbig Be stars

T. Alonso-Albi et al., A&A

This paper presents the last of the data from a survey of Herbig Be stars using the Very Large Array (VLA) and the Plateau de Bure interferometer (PdBI). The point of the survey was to investigate the properties of intermediate mass stars and to determine the occurrence, lifetime, and evolution of the disks surrounding them.

Herbig Be stars, along with Herbig Ae stars, are pre-main sequence objects. Herbig Ae stars have strong infrared excesses and disks that are similar to those of T Tauri stars. Herbig Be stars tend to be more modest in their infrared excess, and their disks are flatter than those of Herbig Ae stars. While no one is quite sure what causes this divergence in disk geometry, T. Alonso-Albi et al. put forth the idea that Herbig Be stars lose a large portion of their mass before they reach the pre-main sequence phase.

T. Alonso-Albi et al. report the results for six objects studied at mm wavelengths. They chose mm wavelengths since optical/NIR and mid-IR observations only provide information about the disk surface and cannot give the disk mass. Observations at mm wavelengths also allowed them to determine the size and properties of the large dust grains that partially make up the disk.

The authors claim that a two component model is necessary to fit the SED, since the envelope surrounding the disk contributes so much of the observed flux. Out of the six objects observed, four were found to have disks. They found that the disk mass was usually < 10% of that of the entire envelope and 5-10 times lower than that of the disks around Herbig Ae stars.

They propose that photoevaporation is the cause of the dissipation of the disks. As I understand it, their argument is that this happens with HAe stars as well; the time scale is just shorter. I’m not entirely clear on why there would be such a significant difference. I’m actually fuzzy on the physics of how these star/disk/envelope systems work at a detailed level. As I work on this project I think it will be necessary to learn more about the astrophysics in order to make any substantial and novel claims.

3 Measures of an Algorithm

I had a rather restless evening after I got off work today. All day at work I had things running through my head that I wanted to work on after I got home. I can’t remember a time when I’ve been so interested in my own extracurricular activities. I had moments today where I’d be thinking about my plans for my thesis and be so absorbed that my brain would short circuit. I’d realize that I didn’t remember the last few minutes except for my thoughts. Did I just use the washroom? I must have, since my hands smell like soap. Did I flush the toilet?

Despite all this I stayed at work later than usual to finish up some things. When I got home I couldn’t really focus on anything in particular. It was a combination of it being close to dinner time and having too many things that I could potentially work on. So I looked at funny pictures on the internet until dinner.

Joe wants to try the food at all the funky, super Hawaiian places around Hilo before we leave, so we went down to Cafe 100 with Max. There was much gravy to be had.

When we got home I tried reading a paper about the disks of Herbig Be stars but I wasn’t feeling it tonight, so I decided to try another python exercise. I’m still doing simple stuff but I had some fun with this one. The exercise was:

Given an array of names, print out a random pairing of names using every name exactly once. Handle the case where there is an odd number of names in the array.

I was pleased with how mindful I was about the things I’ve been learning. The algorithm itself came easily from thinking about what I would do if I were asked to do this with names on notecards. First I wrote the program handling only even-length arrays, then added the code that handles odd-length arrays after I already had that working. It’s pretty easy to tell that this was the case:

import random as r

names=["Alexa","Joe","Petra","Inger","Walder","Ricardo","Max"]

if len(names)%2 == 0:
    while len(names) > 0:
        i=r.randint(0,len(names)-1)
        first_name=names[i]
        del names[i]

        x=r.randint(0,len(names)-1)
        second_name=names[x]
        del names[x]

        print first_name, second_name

else:
    while len(names) > 1:
        i=r.randint(0,len(names)-1)
        first_name=names[i]
        del names[i]

        x=r.randint(0,len(names)-1)
        second_name=names[x]
        del names[x]

        print first_name, second_name
    else:
        print names[0]

It works, which is nice, but I knew I could make a less bulky version of it. Which turns out to be:

while len(names) > 1:
    i=r.randint(0,len(names)-1)
    first_name=names[i]
    del names[i]

    x=r.randint(0,len(names)-1)
    second_name=names[x]
    del names[x]

    print first_name, second_name

if len(names) > 0:
    print names[0]

I was interested in objectively comparing these programs. There are three ways to measure the goodness of an algorithm:

1.) Correctness

2.) Size

3.) Speed

Well, they are both correct, and it’s obvious which wins on size, but does the size of either program indicate anything about their corresponding speeds? To answer this question I’m going to start by saying that,

Number of Steps = Speed

Let’s start with the concise version. There are eight steps in each pass of the ‘while-loop’: seven in the body plus the comparison itself. In the even case the ‘while-loop’ runs n/2 times, which we multiply by 8. To account for the ‘if-statement’ we add 1 in the even case, since the comparison happens every time, and 2 in the odd case, when the statement is actually executed. So,

Concise Version:

Even: speed = 8\left(\frac{n}{2}\right)+1

Odd: speed = 8\left(\frac{\left(n-1\right)}{2}\right)+2

When I measured the original version the same way I found out (surprise!) that it has the same speed as the concise version. It’s trivial to show.

For the even case of the original version, the ‘if-statement’ runs every time the program is run and the ‘while-loop’ is exactly the same as in the concise version. So,

Original Version:

Even: 1+8\left(\frac{n}{2}\right)

For the odd case it’s the ‘if-statement’ + the ‘while-loop’ + the ‘else’, thus,

Original Version:

Odd: 1+8\left(\frac{\left(n-1\right)}{2}\right)+1 = 8\left(\frac{\left(n-1\right)}{2}\right)+2

Frankly, I was more than a little surprised when I came up with this result. I was expecting the longer version to be much slower but such is life.
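Just to double-check the bookkeeping, here’s an empirical version (my own repackaging, written in Python 3 unlike the snippets above): the concise algorithm wrapped in a function that increments a counter using exactly the accounting I did by hand.

```python
import random

def pair_names(names):
    """Randomly pair every name exactly once, counting 'steps' with the
    same bookkeeping as the hand analysis: 8 per while-loop pass (7 body
    steps plus the comparison), +1 for the if-comparison, and +1 more
    when a leftover name actually has to be handled."""
    names = list(names)            # work on a copy so the caller's list survives
    pairs, steps = [], 0
    while len(names) > 1:
        steps += 8                 # 7 body steps + the loop comparison
        first = names.pop(random.randrange(len(names)))
        second = names.pop(random.randrange(len(names)))
        pairs.append((first, second))
    steps += 1                     # the if-comparison always runs
    leftover = names[0] if names else None
    if leftover is not None:
        steps += 1                 # handling (printing) the leftover name
    return pairs, leftover, steps

pairs, leftover, steps = pair_names(["Alexa", "Joe", "Petra", "Inger"])
print(steps)  # even case, n=4: 8*(4/2) + 1 = 17
```

For n = 7 the counter comes out to 26, matching 8((7-1)/2)+2, so the formulas hold for both parities.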

Sorting Algorithms

I’ve wanted to “be a programmer” for several years now but until recently I hadn’t been committed enough to actually learn. I took some computer science courses but they were mostly from the hardware side; very fun, though.

And then there were always other things to do.

Within the last month or so, though, I’ve been working on teaching myself python. I first fraternized with python when using the graphics library matplotlib for my QSO project. I struggled with making a contour plot, which shamed me into finally making an effort to learn how to tell my computer to do fancy tricks.

I’ve been doing fine with the most basic of basic stuff. Read in a file, sum an array, guess a random number, on and on…I didn’t really need to pause and think hard about what I was doing until I started getting into sorting algorithms.

It’s a very basic task. Take a list of names and put them in alphabetical order. So easy that I’ll eat pie as I do it. I decided to start off with an extra helping of challenge, so I didn’t look up any sorting algorithms. Having never taken a software-side computer science class, I was unsullied; I didn’t know what the common algorithms for sorting an unordered list are. It ended up being a difficult thing for me to get my head around (which is a difficult thing to admit since it seems so simple, so I might delete this sentence later).

I gravitated to the idea of creating a new array that I would sort the names into, which, from speaking to Joe, I learned is a common algorithm. But I just couldn’t get it to work. I struggled with getting the right compares to happen, and then when I did get the elements compared properly I was at a loss for how to get the program to do the right thing with them. This is what I ended up with; it works, but it’s completely not my idea:

names=["Zooey","Joe","Ashela","Alexa","Walder","Adrianna"]
sorted_array=[]
x=0
while x < len(names):
    i=0
    while i < len(sorted_array) and names[x] > sorted_array[i]:
        i=i+1

    sorted_array.insert(i,names[x])
    x=x+1

print names
print sorted_array

The biggest insight I got from this was that while loops are more than just things used to count. In my work at Gemini I only ever use while loops to keep track of lists of files. I thought I knew them well, but my eyes are open now. In this example I’m using the inner while loop to keep track of two conditions: that ‘i’ stays within the bounds of sorted_array, and what position names[x] should be put in. A name gets pulled out of the unsorted array and is compared to what’s already in the sorted array. Whenever names[x] comes after something that’s already sorted we push ‘i’ up, until the second condition of the inner loop is no longer fulfilled; then ‘i’ marks the spot.

The only point I get on this is that I was able to properly come up with the concept of “insertion sort”; I sadly lacked the skills to actually come up with a correct algorithm and program it.
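For what it’s worth, Python’s standard library has this exact “find the spot, then insert” move built in via the bisect module, which does the inner-loop search by binary search instead of my linear scan. A Python 3 sketch:

```python
import bisect

names = ["Zooey", "Joe", "Ashela", "Alexa", "Walder", "Adrianna"]
sorted_array = []
for name in names:
    # bisect.insort finds the insertion point by binary search and inserts,
    # doing in one call what the inner while loop above does by linear scan
    bisect.insort(sorted_array, name)

print(sorted_array)  # alphabetical order
```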

So I decided to try again with bubble sort.

Joe told me about the concept and I went for it in the hopes of regaining my honor. At first I came up with a seemingly correct but, on close inspection, clearly wrong algorithm. I did eventually get it right, but let’s start with the specious example:

names=["Zooey","Joe","Ashela","Alexa","Walder","Adrianna"]

x=0
while x < len(names):
    for i in range(0,len(names)-1-x):
        print names[x], names[i]
        if names[i] > names[i+1]:
            names[i],names[i+1]=names[i+1],names[i]
    x+=1

print names

“Bubble sort” gets its name because the sorted words are supposed to “bubble” up. Basically, if you compare and swap enough times you’ll get to a point where you don’t have to swap anymore. If you look at this for about one second (which is about the amount of thought I put into it) the above program looks good. Oh, the right compares are happening? Oh, and the for loop has some fancy business going on? It must be doing the right thing!

But look. See how the comparison that I’m printing out has nothing to do with what I’m actually doing to the array? And I seem to be up to my old tricks again, just using while loops to count things. This won’t do. Here is a correct version of bubble sort:

swap=True

while swap:
    swap=False
    for x in range(0,len(names)-1):
        if names[x] > names[x+1]:
            names[x],names[x+1]=names[x+1],names[x]
            swap=True

print names

I’m now using the outer loop to keep track of whether any swaps had to happen in the previous inner loop. This ensures that the compares and swaps keep happening until the list is ordered, without any fancy counting (I wonder if that’s a good rule of thumb? If you ever find yourself trying to be clever with counting you’re probably doing it wrong.).
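To convince myself the swap-flag version really works, it helps to wrap it in a function and compare it against Python’s built-in sorted(). This is my own repackaging, in Python 3; the logic is identical to the block above:

```python
def bubble_sort(items):
    """Repeatedly sweep the list, swapping adjacent out-of-order pairs,
    until a full sweep makes no swaps."""
    items = list(items)  # sort a copy, leave the caller's list alone
    swapped = True
    while swapped:
        swapped = False
        for x in range(len(items) - 1):
            if items[x] > items[x + 1]:
                items[x], items[x + 1] = items[x + 1], items[x]
                swapped = True
    return items

print(bubble_sort(["Zooey", "Joe", "Ashela", "Alexa", "Walder", "Adrianna"]))
```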

It works and I feel good about it and I’ve written nearly 900 words about it. But I still want to sink my teeth deeper into sorting algorithms until I feel truly solid in my understanding of how to program them.

Thesis

I’m always eager for March to end. March is like the Wednesday of the academic year, sitting between the winter holidays and the end of the spring semester. I still feel this way despite not being in school this year. I didn’t take a vacation during Christmas and New Year’s, but my Gemini job ends April 30th and I have a blissfully empty summer schedule before I start my last year of school in the fall.

There are many things that I’m excited about this April. The second season of Game of Thrones, it no longer being March, and, in less than two months, Joe and I leaving Hawaii and going home. But mostly I’m thrilled about starting to work on my senior thesis. I’m going back to Bennington for my final year after spending a year at UMass, so I feel like I need to start early to have a solid foundation for when I get back.

There is a new full-time astronomy and physics professor at Bennington this year, Hugh, who will be my thesis advisor. I haven’t met him, but we’ve emailed about the project that I’m interested in. It doesn’t have anything to do with his own work, which he’s fine with as long as I can find an expert in the field who is willing to act as a second advisor.

Working at Gemini has been amazing in this regard. I have met so many people who are doing such awesome things in their fields. I met Bernadette, who works at Gemini South, at the Hilo Burger Joint during after-soccer drinks while she was visiting Hilo for a few days a couple months ago. Bernadette was the first author of the paper on RR Tau that we at Maria Mitchell based many of our assumptions on for the spectral monitoring program for UXors.

Of course I had to talk to her. We had actually met the previous year at the AAS meeting in Seattle, and she remembered me and my poster. She thought it was good! Over our beers and burgers I told her I wanted to expand on the results of the paper for my senior work; she said that she had high-resolution Keck spectra of RR Tau that she had never published, and the first steps toward collaboration were made.

I sent her my paper the next day and just this week we finally got around to talking again. She’s back in Chile, so we polycom’d and it was all very fancy. It ended up being a wonderfully productive conversation: I came in with an idea of what I want to do for my thesis, and Bernadette was very helpful in pointing out papers and authors I should look into.

Instead of making you read my paper I’ll summarize it. UXors are a subclass of T Tauri stars. They experience pronounced and non-periodic dimming in continuum light. Their mechanism of variability has been disputed for decades now. UXors have had a lot of photometric monitoring done on them but they have not been studied spectroscopically nearly as much. One of the advantages of Bernadette’s project is that she had spectroscopy.

While I was working at Maria Mitchell Observatory, Vladimir, the director, developed a method of extracting the H-Alpha line from the continuum; my role was to help with its development and deployment. The method involves two narrowband interference filters, one centered on the H-Alpha line and the other sitting in the continuum but overlapping with the H-Alpha line (see the interference filter set-up figure), and basically subtracting the flux in the continuum filter from that in the H-Alpha filter. This meant that we could look at the temporal changes of the H-Alpha emission line and compare them to the evolution of the continuum over time.
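Stripped of all calibration detail, the two-filter trick is just an on-minus-off subtraction. Here is a toy Python 3 sketch of that arithmetic (my own construction, not Vladimir’s actual pipeline; it assumes the filters have already been cross-calibrated so the continuum contributes equally to both):

```python
def halpha_line_flux(flux_on, flux_off):
    """Toy two-filter subtraction: the 'off' filter samples nearby continuum,
    so subtracting it from the 'on' (H-alpha-centered) filter leaves roughly
    the emission-line flux. Assumes matched, cross-calibrated filters."""
    return flux_on - flux_off

def halpha_light_curve(on_series, off_series):
    """Apply the subtraction epoch by epoch to get a line-only light curve."""
    return [on - off for on, off in zip(on_series, off_series)]

print(halpha_line_flux(12.5, 4.5))  # -> 8.0
```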

The insight this led us to is that while the H-Alpha line lives a life separate from the continuum on short time scales, the two correlate on long time scales (~100 days). This means there is possibly a second mechanism of variability with different effects than whatever causes the short-term changes.

The green dotted lines show long term correlation between light curves.
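As a toy illustration of that short-versus-long timescale picture (entirely made-up numbers, not the RR Tau data): two light curves that share a slow trend but have unrelated short-term wiggles correlate weakly point by point, yet strongly once the fast variations are smoothed away. A Python 3 sketch:

```python
import math
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def boxcar(xs, half_width):
    """Moving average: a crude way to keep only the long-term behavior."""
    return [mean(xs[max(0, i - half_width):i + half_width + 1])
            for i in range(len(xs))]

# shared slow trend plus anticorrelated fast "noise"
trend = [math.sin(2 * math.pi * i / 100) for i in range(300)]
a = [t + 0.5 * (-1) ** i for i, t in enumerate(trend)]
b = [t - 0.5 * (-1) ** i for i, t in enumerate(trend)]

raw = pearson(a, b)                            # weak: fast wiggles dominate
smooth = pearson(boxcar(a, 5), boxcar(b, 5))   # strong: only the trend is left
print(raw, smooth)
```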

So, what I want to do for my senior thesis is explore that second mechanism of variability. In Bernadette’s paper she and her co-authors claim that the H-Alpha line is not correlated with photometric variability, but they didn’t know what was happening on longer time scales since they were only using spectra.

They also claim that the [O I] forbidden line is not affected by changes in the continuum, but I feel that conclusion, like the one about H-Alpha, suffers from not knowing about the second driver of variability. I want to see if I can associate the [O I] line with the long-term variations. This would be significant because [O I] is indicative of stellar winds. If [O I] relates to long-term continuum changes, then that might mean that some wind is kicking up dust or the like and obscuring the star.

Bernadette and I talked about applying for telescope time through Gemini’s Poor Weather program. RR Tau is bright enough that we could get good spectra even when it is at a minimum, and Poor Weather is an undersubscribed program, so I’m likely to get time on the sky. I also want to email Vladimir about what he’s been doing with RR Tau and UXors since I left.

I need to do a review of the literature before Bernadette and I talk again on April 9.