It's been a year since I added a post. With the term ending, and with prospective PhD students contacting me about potentially working with me next year, I have prioritized updating this site.
As an aside, while I identify closely with the Learning Sciences as a field, I believe there are a number of everyday learning experiences that we need to seriously examine. Don't get me wrong - there is still a lot of really interesting work coming out of LS. However, learning takes place in so many ways and at so many different moments that we have before us a near boundless task. Nowadays, we have people foraging for information on the internet, but we have not really tackled how that information discovery and learning take place. (Connected Learning and Information Sciences have perhaps done some of the most relevant work here.) While there is some research in developmental psychology, we have not done enough to look at learning as it takes place at home. (The LIFE Center was an important spotlight for this area, but attention seems to have waned - although some outstanding scholars who have been influenced by work in this area are continuing it.) We rarely consider learning on the job, even though Northwestern's Institute for the Learning Sciences and Xerox PARC's Institute for Research on Learning were based on studying learning in work settings. (I know that there are occasional pieces that talk about it, but they are few and far between in the current LS literature.)
I have been thinking on and off about what happens when life forces you to learn new things because of unexpected changes. I have begun to explore this with families that discover they have a family member diagnosed with type 1 diabetes. It is something I hope to write more about soon. I have one published paper and various conference pieces on that, with a journal article in preparation.
I have been thinking about proposing another book in the future too, but I really need to finish the book I have under contract on how libraries are changing and what dialogue could be developed between the information sciences and the learning sciences. That will get done soon.
Well, that's probably enough for another 12 months when I realize I have fallen way behind on keeping this blog and this website current.
I continually learn things about the academic job market. I have sat on both sides of the table and also sat at the bar listening to the experiences of people who have sat on either side of the table. (The academy is a very sitting-intensive place to work!)
Something has struck me recently about the academic job search. It's when an institution decides against an applicant with the explanation that "it's not that you aren't excellent - you just aren't a fit for us right now."
I'm not convinced that this statement always has truth to it. Many times people will gush about how great so-and-so is, but if the opportunity comes to hire them, they pass. It isn't every situation, but I can think of enough specific cases where the sentiment is, "that person is great for where they come from, but not really quite good enough for us." I've seen this take place even when the call for a position is quite open, which makes it much harder to buy the "not the right fit" justification.
The upshot is that the academy is a prickly beast when it comes to hiring. But, at least one gets to spend a lot of time sitting in that business.
So, I am not great at blog maintenance. I have a bunch of stuff saved on my computer for things I wanted to write about but haven't, including:
None of those things have happened. I'm off sabbatical. Life keeps me busy. Perhaps another time?
The title of this post is Victor Lee Utah CV because I expect those to be the likely search terms for someone looking for my CV. Instead, they get this blog post.
I am not a big fan of posting my CV. To be clear, if my job required it or highly expected it of all faculty, then I would go ahead and post it. However, if I can get out of posting it, then I will. Why? Here are a few reasons:
Again, I would totally make my CV public if I needed to based on what my current institution required. (My home address would disappear, though!) However, given the option, I'll just keep my CV to myself and share if asked. You are welcome to ask me for it, and I probably would share it, so long as you and I have met at some point, and I know you aren't trying to be an internet scammer.
I value humility. I'm not sure if it's a cultural thing, but I really like and respect the humble person. I like to think we all have similar anxieties and ambitions, we put our pants on one leg at a time, and we all read our smartphones while in the bathroom, etc. At the same time, I know I work with and around amazing people - amazing in the sense that they solve hard puzzles and ask fascinating questions or are so passionate about something that they go out and try to make things happen that make life better for people or other beings. In many ways, I am in such an amazing line of work to know and be inspired by these people.
At the same time, these people (and for now, I'm talking about academics) have an incentive to brag and boast. We are supposed to communicate value to our institutions and impress our colleagues and our students. It's part of the game. Being an academic is likened to being an entrepreneur - build a brand, make strategic partnerships, try to get investments, get key staff, and sell, sell, sell, sell. So we have to boast to make those things happen, right?
I've been amazingly fortunate to have received some distinctions and awards in the past. I really do not know how to respond to them besides say 'thanks' uncomfortably after I hear congratulations. Frankly, it even makes me a bit uncomfortable to write those words, and I'm really not trying to #humblebrag. But I have noticed on social media things that look like flat out boasting, and I don't care for it. When Facebook releases its dislike button, I'm not going to click on it. I'll probably click like because I'm happy that someone got some nice recognition - that's great and I want to be supportive. But at the same time, I'd rather discover it accidentally and in person, slap them on the back and buy them a drink and comfortably embarrass them about being awesome.
So is boasting okay? In front of me? I think it's fine to boast that a grant got funded or a paper is accepted or finally published. Why? Those are collective challenges we all face, so we like to see one of our own succeed. We know the gauntlet that we must all run. It gives me hope to see someone make it on the other side. However, I don't really care to see other distinctions self-promoted. Exception: I think it's fine if it gets posted by someone other than the recipient - often universities use these as bragging things for alumni and such, and that ends up being in the public domain - so what happens to that is what happens to that. I think boasting about some touching thing that happened, like a student being so grateful that they leave a note that says you changed their life and their life direction in the best way possible is fine too. That is nice because it gives us something to aspire to and also reminds us of some of the good things we try to do and how we can have an impact.
That's just me. Maybe I am not meant for PR, although I do get asked to do PR things a lot. Maybe this is just my internal backlash to knowing what you have to do in academic PR. Or I'm just a really odd duck.
No, this isn't about me. If you know me, you likely know my partner/wife (I always preferred husband and wife as terminology rather than partner - I would like to refer to same sex couples' partners as their respective husband or wife, but I abide by whatever people request of me. Let's ignore the historical heterosexism and patriarchal roles that the words husband and wife entail for now). Anyway, my wife is great - way out of my league. People who meet or know her think she is sweet, beautiful, and just great. She's fairly quiet too. And they are right. I'm not just saying that because she'll read this. Honestly, I don't think she knows I maintain a blog because most of our conversations involve "who is picking up which kid? Did you go to the store, or want me to stop by and get X?"
Anyway, as a friend commented on Twitter a couple of months ago, he realized that he went past the 'go to lots of weddings' period of his life and had just finished the 'go to lots of baby showers' part. He observed he is now in the 'talk friends through their divorce' phase. I'm realizing that I am entering that as well. I am not going to bother counting the relationship endings I am seeing or have seen recently. It's a good number.
I have to admit, this is very new territory for me. I did not experience divorce in the family firsthand. That's not to say things were hunky dory, but I just haven't lived through that as personal experience. I had lots of friends as a child whose parents got divorced or were divorced. I understand it really can be the best thing for the family and respect it's a private and personal decision. I did see for many of those friends that there were some wounds that lingered from that, but again, I believe that what the family needed to do was what the family needed to do.
Things I struggle with:
Even when it isn't divorce but some sort of separation, I am at a loss. Sometimes it is a trial separation. That is already weird to me. It's like trying out divorce territory but not really, so I feel extra cautious about those three things. Sometimes it is a necessary separation (like when a job takes one spouse to another place). I see this in the academy a lot. That is hard for me to imagine doing, although I would not be surprised if I have to do it at some point in my career, just to make sure the kids can finish their school years while I or my wife starts a new job somewhere else. It may not be playing with fire, but it seems like it can be like buying a box of matches.
But on the topic of the separation leading to divorce or all that - The whole thing makes me feel a little sad because I was often at the wedding or on a group date or at a baby shower or whatever. I remember they used to be happy. I remember the vows and the toasts and how beautiful the bride looked or how the groom's eyes lit up upon seeing his partner to be at the other end of the aisle. I don't cry at weddings. I appreciate them. Partly because open bars and fancy appetizers are awesome.
Anyway, a little food for thought I had that has been sitting for the past week. It's not really work related, but this is a blog. I guess it's what you do on blogs - write something out into the ether, and maybe a nice spam bot finds it.
I don't know who is going to find this blog besides spam-bots, but assuming some academic-y folks do, I thought I'd post on academic publishing. Specifically, I'll post on academic publishing in education (and my corner of it, which is learning sciences). I'm inspired partly because these were largely tacit rules I learned over time and also because I just saw this circulating: http://cacm.acm.org/magazines/2015/9/191173-should-conferences-meet-journals-and-where/fulltext
For good or ill, publishing is our bread and butter. It's how we are judged as being valuable as researchers (although if you keep generating big money grants and don't really publish much, I think your university or institution might see the value of having you). Let's ignore how the "publish or perish" mentality means too much stuff of "meh" quality gets written just so we can report large numbers of pubs to our institutions. It's a problem, but not one I want to talk about here.
In a nutshell, goodness of publication venue is a judgment we should defer to the experts in that academic community. They know what is good and why. There are highly ranked journals that publish the boring same old same old and, frankly, don't advance our knowledge in a meaningful way. I also say this deference to people in the field is important because I hear time and again that fields like Computer Science are penalized at the university level because the publications expected in the CS community are conference proceedings, but institutions want journal articles. Hey, CS conferences are no cakewalk! Their acceptance rates are tougher than a lot of journals'. If I'm evaluating a job or tenure candidate's CV based on CS conferences, color me impressed if I see regular papers in well known and top conferences. My field is, I believe, getting to a point where conference proceedings papers have some real oomph to them. Conferences with published proceedings also have a quicker turnaround, so the inherent delays in the journal publishing system are mitigated. We actually learn what is going on in the field much sooner, rather than reading really interesting stuff on data collected in 2005. (Seriously, I'm pretty sure I just reviewed a journal manuscript like that. I think it was fine, but I have been reviewing a lot of papers and can't remember each and every one after I submit my review.)
So here are my opinions, influenced by my field, of publication types - probably aimed toward junior faculty, and subject to dissent from others:
Authored Book: not worth it unless it is with a well known press, it will have wide distribution, and is on a specialized topic that doesn't have books and is something lots of people really care about even if they aren't academics. This is time sensitive - a topic that used to be in vogue but isn't now is not a good use of time. Gotta be ahead of the curve and gotta be right about being ahead of the curve! And it doesn't replace journal articles, or even published conference proceedings papers, IMO. In general, not worth the time for someone young, but there are exceptions.
Edited Book: icing on cake, but it isn't the cake. Mostly shows you can coordinate a big writing project. A lot of the respectability of the book rides on the topic, press, and the quality of contributors. (Note: I did an edited book and I am proud of it, but I think it's because I just wanted to do it and would not be stopped. It's definitely not for the money - I could make more money at a neighborhood poker night, and I am not a great poker player. I could make more money holding a yard sale. I could make more money if I got to search the couch cushions of every house on my street, etc)
Well, those are my piecemeal thoughts. Someone who reads this is likely to disagree with me, but in 2015 and in the little corner of education research I live in, it's what I am noticing. I may update this all in the future. I am pretty sure the fonts are screwy, so I should update it just to fix that.
Update: I know html but don't have the patience to dig around in the code and fix the formatting that this web service thing hides from me. So shifting fonts it is.
Some time ago, without us really noticing, some amazing colleagues and I were awarded a grant from the National Science Foundation Cyberlearning program to bring together some great minds to figure out new frameworks for connecting youth and data science with today's rapidly changing technology infrastructure. Short version: We think that youth are going to benefit from learning with and about data, but we need some updating of our ideas about what that will look like. It looks to be an exciting new opportunity and an excuse for me to work with really sharp people. What is especially cool is that the idea for this came from a roundtable brainstorm involving the four of us at a Cyberlearning meeting that was intended to foster collaborations and address needs in the emerging space of cyberlearning. Going to those meetings and putting heads together paid off!
Okay, back to what I was doing, which was probably some sort of work-related writing.
Last post was comparing some data on the Fitbit Charge HR and the Apple Watch with a little bit of Withings app iOS tracking in for good measure. Since that time, I have been relying primarily on the Apple Watch. Why? Because I paid much more for it. On top of that, I needed to use the Fitbit I was wearing for research purposes, and I try to be very cheap with my research expenses and use whatever equipment I have on hand (or on wrist). So the Fitbit Charge HR that I use most often is currently living elsewhere and hopefully generating interesting data for some of the research I do with kids (self promotion: see out of date publications page).
Also, the Fitbit stunk. Literally. I know this is a known issue, but my wrist and the device itself smelled bad. I stopped wearing it at night so it could 'air out'. I've tried cleaning it with rubbing alcohol with a cotton swab and various other things. The result: a bracelet that smells like alcohol and still slightly stinks. My father, to whom I had sent a Fitbit Charge for a past parent appreciation holiday, called me once to ask about how to deal with the smell.
I don't think it's too much to ask for a device that is easier to clean or more smell resistant. I was getting pretty tempted to start Febrezing that thing. I gave it a super duper aggressive cleaning and a period of no skin contact before we got it circulating in research again, so hopefully the gross factor decreased some.
With all that said, I have to say that I really prefer the Fitbit for activity tracking. It's been striking to me how much less often I check my stats on my Apple Watch. We ascertained from the last post that it does a reasonable job tracking, albeit undercounting some. But consider the difference here and understand why I miss wearing the Fitbit.
To check steps on the Fitbit:
To check steps on the Apple Watch:
OR, I could have done this:
The number of steps involved is much higher in both Apple Watch scenarios. That's pretty annoying. Even one extra step is annoying, let alone 3 or more. (Yes, I know being grateful for avoiding display shut-off is not a step, but I am trying to work against the clock.) People who think about user experience know even 1 or 2 extra steps can be a deal breaker. We just do not have the patience for those steps, and the opportunity for error goes up really quickly.
Don Norman, author of required reading in many design and HCI courses and current head of the UCSD Design Lab, wrote a little rant about how Apple lost its user experience way. He is right. This experience, while pretty to look at and neat to have possible, is pretty annoying compared to 'press the button once' on the Fitbit. But what do I know? And why do I stick with the Apple Watch? (oh, right, cause I paid a lot of money for it, and the Fitbit is being used by someone else for work, and it makes my wrist smell bad - market opportunity for teeny tiny sticks of deodorant, anyone?).
PS - I know there is a watch face that has a one touch access to the steps, calories, distance screen. The problem is that it is ugly, and it does not have jellyfish. This is the problem with the limited watch faces.
I'll consider posting about something besides a wearable doohickey soon. As a parting image, here is that eyeball monster from Altered Beast I was talking about.
For the past several weeks, I have been wearing an Apple Watch (nerd bling!), a Fitbit Charge HR, and not exactly wearing but still carrying my mobile phone like all the other technology-addicted, almost-middle-aged people out there. The phone has the Withings app on it because I track my weight so that I can cycle back and forth between exuberance (Alright! I lost 5 pounds!) and being bummed out (And I just gained 7 pounds. PBBBBBBT!!!!). The Withings app tracks steps based on the M7 chip inside my iPhone 5S (why yes, I give Apple a lot of my money, thanks for asking). So I wanted to see how the three compared to each other. This is a casual comparison - I'm not running statistical tests or anything this time around.
After getting my Apple Watch, I decided to do this comparison thing. I was going to wear both the Fitbit and the Watch each day as I normally would. What's normal? When I wake up in the morning, I put on each device. The Watch goes on my non-dominant (left) arm. The Fitbit goes on my dominant (right) arm. I wasn't picky about which went on first - usually, it was whichever I could fumble and grab first. I decided some time ago (about when I discovered my wrist was getting sort of smelly from the Fitbit Charge) that I was not going to wear any devices at night nor track my sleep. (Yeah, bad self tracker, I know.) I set the recording of each device to the correct arm according to their corresponding apps. I carried my phone with me as I normally would - which means it goes with me to breakfast and goes with me when I walk the dog in the morning. When I get to work, it stays with me for the most part, but I will forget and leave it on my desk. At the gym, it sometimes goes with me onto a treadmill, but it might also linger in the locker. If I am going to be going into water, like the pool or the beach or the shower, I take off all devices and keep the phone away from water. For the most part, I went about my life.
Normal life involves walking the above mentioned dog, going on outings with the family, doing simple social activities with friends, and working at a job that involves a lot of sitting near a computer and cursing at the photocopier. I occasionally forget my phone for some reason, panic, and then gradually accept that I am phoneless for several hours. That is normal, and I did not track what days I left my phone somewhere. That is just life. I was trying to run an experiment that was fairly true to my life.
For days tracked, I picked the window between 6/3/2015 and 7/21/2015. Are these special days? An anniversary or obscure holiday? No. I got the watch on the 2nd, but it was late in the day so I didn't have a full day of data on that whereas the other devices/app had more time to be attached to my person. I actually wanted to go all the way through July, but my phone had been having issues and I had to get it replaced (hooray applecare! and yes, more money to Apple for applecare), so I lost Withings step data after the 21st. That's why we end then.
QS Labs was kind enough to release an iOS app called QS Access that let me get a .csv file of my Apple Health data. (Note: when I peek at the data in the Health app, it seems that some of the data points, like a span of a minute or two, are from the phone rather than the watch? But I didn't care enough to dig into it and assume this is Apple being smart about getting a more thorough picture. I'm sure I could read some message board and get a lot more details, but I'm just calling data from the Health app "watch data" even though it isn't 100% true.)
Fitbit data I grabbed from the dashboard of Fitbit. I know there are hacks to grab data - in fact, I'm associated with one of them that will grab it in minute increments - but I just wanted to compare daily totals. Maybe one day in the future I will look at minute by minute or hour by hour or some other increment to see if there is something cool going on.
Withings lets you just export .csv files from their web interface, so that was easy enough. Just a few clicks here and there. Then stick it all into a spreadsheet and make a few plots. Again, I'm being lazy about this and am not running any serious statistics. I have had the most experience with Fitbit devices (I once tested the Zip against the Flex and saw the Flex undercounted quite a bit relative to the Zip), so I figured I'd use the Fitbit devices as a baseline. Anyway, here are the results.
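For the curious, the spreadsheet step above really just amounts to joining daily totals by date. Here's a minimal Python sketch of that join; the column headers and step counts are made-up stand-ins, since each service's export format is a little different:

```python
import csv
import io

# Hypothetical inline data standing in for the real .csv exports.
# Adjust the column names to match the actual files.
FITBIT_CSV = "Date,Steps\n2015-06-03,11200\n2015-06-04,9800\n"
WATCH_CSV = "Date,Steps\n2015-06-03,10650\n2015-06-04,9300\n"

def load_daily_steps(text, date_col="Date", steps_col="Steps"):
    """Parse one CSV export into a {date: step_count} dictionary."""
    return {row[date_col]: int(row[steps_col])
            for row in csv.DictReader(io.StringIO(text))}

fitbit = load_daily_steps(FITBIT_CSV)
watch = load_daily_steps(WATCH_CSV)

# Line the trackers up on the days they have in common.
shared = sorted(set(fitbit) & set(watch))
table = [(day, fitbit[day], watch[day]) for day in shared]
```

With a third dictionary for Withings, the same intersection-by-date trick gives you one row per day across all three sources, ready to plot.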
If you take into account my previous experience showing that the Flex seemed to undercount relative to the hip-based clip-on Zip (which is more similar to the research-grade pedometers that exercise science people use), then we might assume that the Charge HR undercounts relative to whatever is my true number of steps. (Also, published research suggests as much.) No matter though - I'd rather have an undercount than an overcount so that I push myself a little more. But what is interesting is that the Charge HR, assuming it was undercounting, was still counting more steps than the other two. See the poorly labeled graph that has not been cropped below.
The Apple Watch and the Charge track pretty close, but the green always seems to be a little higher. If you want to see how much of a deviation there was each day - in decimal approximations, then just keep scrolling.
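That per-day deviation is just one line of arithmetic relative to the Fitbit baseline. A tiny sketch, with made-up step counts standing in for the real data:

```python
# Hypothetical daily totals; the Fitbit count is the baseline.
fitbit_steps = {"2015-06-03": 11200, "2015-06-04": 9800}
watch_steps = {"2015-06-03": 10650, "2015-06-04": 9300}

# Deviation as a decimal fraction: negative means the Watch
# counted fewer steps than the Fitbit that day.
deviation = {day: (watch_steps[day] - fitbit_steps[day]) / fitbit_steps[day]
             for day in fitbit_steps}
```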
So the conclusion here is that a Fitbit Charge HR on the dominant hand seems to count more steps than an Apple Watch on a non-dominant hand. Presumably, the algorithms that they use in each would account for dominant/non-dominant. And for the days that the counting was way off, I'm willing to believe something dumb happened that I don't remember, like a battery dying. (It happens.)
If you want to see it as being like a correlation, here's that plot.
If you ever took a statistics course, you probably covered correlation. You probably saw the pictures of magical correlation of +1 or -1 (everything lies perfectly on a line of slope +1 or -1) and a correlation of zero (which, if memory serves, was always like a perfect dot filled circle in the textbooks. But let's not start on that - there is a cottage industry of ranting against textbooks and I even did a chapter of my dissertation on that which eventually became a journal article.) Anyway, this has an upward line that looks +1ish, although there are some points hovering above the line (Apple undercounts relative to Fitbit). I didn't want to run any statistics, and a correlation barely counts because it's like a click, drag, and button click in your favorite spreadsheet program - so here it is. For these two, r = 0.81. That seems like a pretty high correlation. It's not 1, but getting a 1 is pretty darn hard.
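If you'd rather not trust the spreadsheet's click-and-drag, the same number falls out of the textbook formula in a few lines. The step counts below are hypothetical placeholders, not my actual data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient -- the same quantity the
    spreadsheet CORREL function computes with a click and a drag."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up daily totals for illustration only.
fitbit_steps = [11200, 9800, 13400, 7600]
watch_steps = [10650, 9300, 12500, 7900]
r = pearson_r(fitbit_steps, watch_steps)
```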
Okay, how about the Withings? Again, relative to Fitbit's thing, we get the following deviation plot.
So it turns out that forgetting the phone at home or in the car or on my desk makes it a bad step counter. Who would have thunk?! But still, assuming I'm good about carrying my phone with me on the days that don't have the super sad droopy red bars, it's still a pretty big deviation.
Some time ago, there was a hubbub about some study about smart phones being reasonably good compared to a wearable tracker. Of course, the news media went to town with it and said wearable trackers are the worst thing ever and so take that you annoying techno-posers! But the study actually said that smart phones are a reasonable approximation when you have people doing something like....walking on a treadmill for a science experiment. If you don't have or don't want a wearable device but want to track, then by all means, use your phone! (Just don't forget it at home or in the car or on your desk). But if you want numbers that are a little...um, higher? Then a wearable that you don't have to think about leaving in the car or on your desk is fine. At least it works with this guy who has two thumbs and devices on each arm.
Correlation now? Here it is for the Fitbit vs. the phone app.
Well, that plot is sort of line like - it looks more like a line than a dotted circle. Heck, if I were a sociologist and got this as my plot, I'd probably start doing the Mypos Dance of Joy! (I'd also try to hide the fact that my N is so small.) But doing that click, drag, button trick gives me this: r = 0.60.
That's not terrible. It is certainly a good approximation, and I say this without actually looking at any data, but I imagine it is probably close to the correlation of height and weight of American adults of a certain age. So it isn't bad - you can get reasonably close (if you remember to carry the darn smartphone everywhere, which we think we are good at, but I can't even tell you the number of times the words "Do you know where I put my phone?" are uttered each day in my house).
What's that, you want one more correlation? For fun? Okay. The remaining pairwise correlation.
This last scatter plot looks sort of like a dragon? Probably because it is green, but I can kind of make out a neck and a tail. It seems the least pretty to me, but since Game of Thrones became a thing, my relationship to dragons has changed. I'd still be delighted if I were a sociologist (and ashamed of the N - let's not forget the shame). r = 0.82. So that's not bad if we use the Watch as our baseline? But like I said before, it seemed like the Watch might get some of its step counts from phone accelerometer motion, so that could be part of what's going on.
Well, I'm happy to use all three device options (especially because the Watch is useless without the phone) even though I'd get double the wristband tan. However, I've scaled down to one for now (the Watch - Apple nerd bling makes me care more about things that do not relate to data, like overpriced beauty). My hunch is that the Fitbit does a better job of getting closer to "REAL" steps, but we are a long way from any wrist-based or handheld device being able to get to that level of accuracy. I could elaborate on that hunch, but I won't right now.
And I know there are a bunch of different analyses I could and should do - average counts, significance tests (okay fine, paired t-tests are all significant at the 0.01 level), weekends vs. weekdays, excluding some outlier days, etc. But if you want my tl;dr (which you wouldn't know about because I put it at the bottom, and if you dr'd it, you wouldn't know it's here), it is that phone apps don't seem as reliable or consistent compared to a wearable device when situated in the actual world of human use, as determined from this human's use. Beyond that, pick your poison. More steps can come out of the Fitbit wrist-worn device, which makes it an attractive option. That is, until you make and sell something with the Apple logo on it. Then all bets are off.
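Since I name-dropped paired t-tests: for two trackers worn on the same days, the test is just the mean of the day-by-day differences divided by its standard error. A minimal sketch with made-up daily totals (compare the resulting t against a t table with n-1 degrees of freedom):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(xs, ys):
    """t statistic for a paired t-test: mean of the paired
    differences over the standard error of those differences."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical daily totals standing in for the real 6/3-7/21 data.
fitbit = [11200, 9800, 13400, 7600, 10100]
watch = [10650, 9300, 12500, 7900, 9400]
t = paired_t(fitbit, watch)
```

A positive t here would mean the Fitbit systematically counts more steps than the Watch across the paired days.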
Oh, and my team published a case study of a tween girl who came to a similar conclusion that you can read here. And the whole question of accuracy is something that a bunch of elementary school kids explored too. We published that also, and one day it will be freely available.
Does the world need a blog from me? Probably not. But this site, which I have created so as to make sure that I have an enduring web presence (because my university is like most others and changes web platforms about every 18 months and makes content cumbersome to change) seems blog-appropriate. So, ta-da! Here is a blog.
I can't promise it will be updated regularly, and I can't promise that it will be very interesting. It will tend toward the professional side and focus on academia, education, and technology. My goal is to put out "interesting" stuff, and maybe someone will read it. At the very least, some spam-bot will find it and email me with offers of analytics services or invites to conferences that don't really exist. And maybe some people will start trolling me, making me think I must have made it big. Or it will just occupy unviewed bits of cyberspace and eventually disappear when I forget to renew the domain name, which will inevitably happen in the future.