
Tuesday, April 14, 2015

More on alignment idea

Yesterday I had a lot of fun brainstorming ideas for a new direction in social media, focused on referral rather than content generation. Most people don't wake up in the morning and type up a blog post, write a bit of music or a poem, or start a new video. Generating content can be hard, which means plenty of people can end up sidelined in social media rather than feeling like active participants. And there is a LOT of great content out there already, but how do you find it?

Consider this hypothesis: social media companies focused on referral would look vastly different from Facebook, Google+, Twitter or the current others.

Which to me is an opportunity to get creative. As I speculate on how one might build such a thing, I find myself thinking a lot about what I decided to call alignment, which is kind of like the alignment used in video games, but has nothing to do with good or evil. It's about characteristics with which a person would willingly, publicly identify.

Why bother? Well, imagine a person gives an alignment of conservative, religious, family oriented, which is public, so anyone clicking on this person's profile sees that alignment. Someone else might have an alignment of liberal, spiritual, free thinker.

The alignments give a natural grouping with other people who align the same way.

That is, the system would automatically add you to natural groups made up of people who aligned the same way.

In addition people could come up with their own groups. Then you can have members of that group suggest content to each other, which presumably will fit with the alignment. So yeah, finally get to content referral! Which is the entire point of the exercise.

But here's the thing: a group would have alignments too, so it would block people with anti-alignments from joining. If you made a conservative group, it would not allow a liberal to join. Or if you made a liberal group, it would not allow a conservative to join.

Not all alignments have anti-alignments though.

If someone made a group without such an alignment it would not block them, so liberals and conservatives could join a group that didn't align against either of them.

Oh yeah, I forgot to mention: people can join any group whose alignments they match.

So joining a group would be easy--as long as you matched alignments with it, or didn't have any excluding alignments. No one would pick people to join their groups. The system would handle it automatically, putting someone in any group they asked to join, as long as they matched alignments.

The system would also automatically remove a person from any group with any change in alignment that required it.
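To make the join and removal rules above concrete, here's a minimal sketch in Python. Everything here (the ANTI pairs, `can_join`, `update_memberships`) is my own hypothetical illustration of the idea, not any real system:

```python
# Anti-alignment pairs: holding one side blocks you from groups
# aligned to the other side. (Hypothetical example data.)
ANTI = {("liberal", "conservative"), ("conservative", "liberal")}

def conflicts(a, b):
    """True if alignment a is the anti-alignment of b."""
    return (a, b) in ANTI

def can_join(person_alignments, group_alignments):
    """A person may join if none of the group's alignments conflict
    with theirs. A group with no alignments blocks no one, so
    liberals and conservatives could both join it."""
    return not any(conflicts(p, g)
                   for p in person_alignments
                   for g in group_alignments)

def update_memberships(person_alignments, groups):
    """After an alignment change, the system automatically keeps only
    the groups the person still matches, removing the rest."""
    return [g for g in groups if can_join(person_alignments, g)]
```

Note there's no human gatekeeping anywhere in the sketch: the system adds anyone who asks and matches, and drops anyone whose changed alignment no longer matches.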

So why bother?

Well, people can be mean to each other, if you hadn't noticed.

Like, what about jokesters messing with people, or people who just lie?

Well, you could voluntarily show your alignment history, which the system keeps, and which would show how long you've held each alignment. So let's say someone has been aligned a particular way for years with no changes. A jokester who changes her alignment just to join that person's natural group would either have to choose not to show her alignment history, which would be an option, or her history would reveal several recent changes among alignments.

Obviously it would take time to build an alignment history, as the system isn't even built yet. But once it was built, people would automatically build alignment histories, which would increasingly group people along stable lines, even while showing some people to be less stable than others, since you can change your alignment any time you choose.
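The alignment history described above could be as simple as an append-only log the system keeps, so that a displayed history can't be faked even though showing it stays optional. A sketch, with all names being my own hypothetical illustration:

```python
from datetime import date

class AlignmentHistory:
    """Hypothetical system-kept record of a person's alignment changes."""

    def __init__(self):
        self._log = []  # list of (date, frozenset_of_alignments) entries

    def record(self, when, alignments):
        """The system appends every alignment change with its date;
        the person can't edit or erase past entries."""
        self._log.append((when, frozenset(alignments)))

    def changes(self):
        """How many times this person has changed alignment, which a
        jokester's history would reveal if shown."""
        return max(len(self._log) - 1, 0)

    def held_since(self):
        """Date the current alignment was adopted, or None if empty."""
        return self._log[-1][0] if self._log else None
```

A long gap between `held_since()` and today would vouch for years of stability, while a cluster of recent `record` calls would show instability, exactly the signal the jokester scenario needs.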

To further protect people, you could choose NOT to show your alignment publicly, but that choice would tell something about you as well.

There's no way not to tell something about yourself in this system. If you don't want to tell people anything about yourself, you'd just stay away from one of these hypothetical social media referral companies, as the concept depends on that information.

The alignment history is the next big idea in this thing, which the concept allows. People could choose not to show it, but that says something in itself. Or they could choose to show it, which can demonstrate a level of commitment to a particular point of view, with the system keeping people from being able to lie about it.

So you can be a blank slate, if you choose not to display any alignment. Or you can be very specific with lots of alignments which you've held for years, where the system vouches for that reality.

Some might be concerned about getting stuck in a box, but such people would probably shift their alignments at will! That is, a person who worries about being stuck in a box may be exactly the kind of person who makes endless shifts in alignment. But what if that leads to others looking at you differently?

That's what it's supposed to do.

But also I'm talking about one hypothetical system, which could be like one social media company. There could be many of them, like there are many social media companies now.

The alignment history would be a way for you to build a public face over time, where the alignments are to be carefully chosen to avoid socially divisive things like race, sexual orientation, age, specific religion, specific political party, or anything else that divides people in dangerous ways.

Yesterday I brainstormed a twin test for a possible alignment: imagine two identical twins, where one has the possible alignment while the other does not. Like one twin is religious while the other is not. Or one is a free thinker while the other is not. Or one is liberal while the other is conservative.

Something that would not pass the twin test for alignment is: intelligent.

If one twin wanted that label, then the other probably would as well, which makes it rather worthless as an alignment. So the alignments are looking for what I like to call split-points.

Still, that's a judgment call, and debates over the best alignments could be a hurdle.

But for now these are just ideas I'm tossing out open source. Helps me work through them, and if others see them as viable, maybe it could happen.

Groups of aligned people could make choices on content to refer to each other, content that would presumably be closer to their interests, something I haven't mentioned yet, and less likely to anger, in a process that could feel more like a community than current social media.

Or that's the idea. Just getting started thinking on it. This process is basic research at this point. Could take years to really get something useful. Or something could happen quickly.

A lot of the fun for me is just putting things out there.


James Harris
